The Hot Aisle
Fresh Thinking on IT Operations for 100,000 Industry Executives

Over the last few months I have been thinking about data centers, how we power and cool them, and what we put in them. I’ve been looking at what the very largest consumers of IT infrastructure do with their data centers compared to industry norms. The first takeaway is that almost none of them conform to those norms. As an example, almost none of them use refrigeration, AC power or conventional enterprise systems.

Try to find a SAN, blade servers, or 10G Ethernet in a Google data center – Google deliberately use commodity parts. You won’t find conventional CRAC units or chillers in the most recent Yahoo sites. Microsoft is building its newest data centers from containers made by Verari Systems.

Google have tried to coin a new phrase to describe what they do – the Warehouse-Scale Computer. It makes sense as a description for Google, because their approach is to ruthlessly take out costs, both capital and operational. They design data centers from the ground up, leaving out the things they don’t need – leaving things out of the mechanical and electrical systems, and leaving things out of their computing, network and storage platforms. It is not, however, such a good generic category name for normal enterprises that want to follow the same path.

I think we can categorize the phases that the most advanced enterprises have gone through as:

  • Server Scale Computing – each computer is self-contained, with its own power unit and its own cooling, enclosed in its own cabinet.
  • Blade Scale Computing – computers are delivered in small batches, with some shared components such as cooling and power modules, in a shared cabinet.
  • Data Center Scale Computing – computers are delivered at extreme scale with exclusively shared components: DC power and shared cooling, typically in a specialist large cabinet or container.
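
To make the difference concrete, here is a rough sketch of how the count of discrete power and cooling units collapses as sharing moves from the individual server to the chassis to the whole facility. The fleet size and sharing ratios are illustrative round numbers of my own, not figures from any particular operator.

    # Illustrative round numbers only – not measured figures from any operator.
    # The point: the number of discrete power and fan units collapses as
    # sharing moves from the individual server to the chassis to the facility.

    FLEET = 10_000  # computing elements (motherboards) in one data center

    scales = {
        # name: motherboards served per shared power/fan unit (assumed)
        "Server Scale":      1,      # every box has its own PSU and fans
        "Blade Scale":       16,     # shared across a chassis of ~16 blades
        "Data Center Scale": 1_000,  # a handful of facility-level plants
    }

    for name, boards_per_unit in scales.items():
        units = FLEET // boards_per_unit
        print(f"{name:18}: ~{units:,} power/fan units for {FLEET:,} boards")

On those assumed numbers you go from ten thousand little power units and fan sets down to a few tens of big, efficient ones.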

My sense is that the largest-scale consumers of IT infrastructure are breaking away from the pack: abandoning pizza box servers and blade servers, moving towards clustered storage and commodity components, and turning away from hierarchical network designs, where most traffic traverses the core network at ultra-high speeds (with ultra-high costs to match), towards leveraging software as the cornerstone of reliability and scale.

There Are 15 Responses So Far. »

  1. Agreed. This makes a lot of sense. I think you're wrong about moving away from pizza box servers. The reality is that everyone is moving towards these since the 1U pizza box is the embodiment of 'commodity server'. Google is an extreme example where they have removed the box and just use the pizza. But it's still a stack of pizzas in a rack.

  2. My understanding of Google was that they did use AC distribution; the secret sauce was in the server power supply (which had its own UPS battery) and the use of a single 12V output on the power supply, with the motherboard doing any voltage step-down for 5V or 3.3V or whatever is needed.

  3. Randy, you are confusing the box with the server! Let’s talk motherboards rather than servers. I see a server as being a self-contained unit with motherboard, power supply, box, fans, and rails to attach to a rack. Server-Scale computing.

    Blade-scale goes one better with some shared components, perhaps larger fans and shared power supplies. A larger cabinet.

    Data Center Scale throws away the individual boxes, fans and power supplies and replaces them with scaled-up shared ones. Great big efficient fans, huge DC power units, better airflow. Maybe container-sized, like Microsoft with their Verari FOREST containers.

    That is what is different, and the word “server” confuses our thinking – hence my new categories.

    Steve

  4. Nik,

    Google do the same as BT do in their 21st Century Data Centers: local rack-based DC power supplies. Yes, they get AC delivered to make the 12V, but that AC is unprotected. Google have ripped out the UPS and battery-room stage, delivering low-cost, data-center-scale DC power at the rack level. That takes hundreds of millions of dollars out of their data center capital costs and improves both PUE and reliability at the same time.

    Google go AC – DC. Stop.

    20th Century data centers go AC – DC – AC – DC because they have a server-scale power supply and need protected AC. Just bonkers, in my humble opinion – and I have been saying so for years.

    Steve

  5. It's a po-tah-toes, pa-tah-toes thing. I understand what you are saying. I'm simply asserting that the notion of the pizza box going away is overstated, not your entire argument. I agree with it in principle, but I've seen lots of datacenters and the pizza box is still dominant. Will you run DC power to it? Probably. Will you containerize them? Sure. The pizza box design is far too useful for maintenance purposes. Google will always be an outlier here.

    Pizza boxes aren't going away, although they are likely to change.

  6. Randy,

    Let’s just agree to disagree. None of the big boys use server-scale computers anymore. Pizza box servers don’t deliver the degree of efficiency that folks need when they have tens of thousands of computing elements in a data center. Why have tens of thousands of power supplies when you can have tens, and why use server-scale fans, which are inefficient, when you can have industrial-scale ones?

    Steve

  7. Steve, that two-stage conversion must be very inefficient. Are there any statistics that show how much power saving can be made by skipping a conversion step as Google have?

  8. @stephenodonnell has a good smash em up debate under way on the concept of “data center scale computing” http://tinyurl.com/mhxyur

  9. Hi Martin

    Good question, and one that there has been much debate about on this site. Small switched-mode power units such as those used in servers are actually not too bad, achieving between 92% and 96% efficiency at full load. Remember, though, that they are never run at full load: to provide resilience in an N+N pair, each PSU can never go past 50% load. Sometimes there are three or four PSUs in a blade chassis, and these can get to 60% or 70% loading.

    Large-scale DC power supplies can achieve very good efficiencies, but one could argue that the 2-3% difference is too insignificant to measure (frankly, I agree). The real saving in capital and operational costs is in missing out the UPS stage. The efficiency loss of converting from DC off the batteries back to AC again can be as much as 15%. Remember that UPS kit also has to be run in a resilient N+N or N+1 configuration, so it cannot run at full load and full efficiency.
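
    As a very rough back-of-the-envelope using the round figures above (illustrative assumptions, not measurements from any particular site), the saving comes almost entirely from dropping the UPS double conversion:

        # Back-of-the-envelope only, using the rough percentages quoted above.
        psu = 0.92   # AC-to-12V conversion (small server PSU or big rectifier plant;
                     # the 2-3% difference between them is in the noise)
        ups = 0.85   # double-conversion UPS stage (AC-DC-battery-DC-AC), ~15% loss

        legacy = ups * psu   # 20th Century chain: AC -> UPS -> protected AC -> PSU -> DC
        google = psu         # Google-style chain: unprotected AC -> PSU/rectifier -> DC

        print(f"AC-DC-AC-DC chain delivers ~{legacy:.0%} of utility power to the board")
        print(f"AC-DC chain delivers       ~{google:.0%} of utility power to the board")
        print(f"Difference:                ~{(google - legacy) * 100:.0f} percentage points")

    On those assumed numbers the UPS stage alone throws away roughly one watt in seven before the power even reaches a server power supply, which is why missing out that stage matters far more than the small difference between server PSUs and big rectifier plants.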

    Thanks

    Steve

  10. Thanks Steve – 15% is a big difference.

  11. OK, Stephen. Can you provide your sources? Last I heard everyone but Google was still deploying commodity pizza boxes with DC power. AFAIK, once you're running DC in the pizza box you see ~20% heat savings and the fans in the boxes themselves are efficient enough at that point.

    If I'm incorrect I would love to know, but it would help to understand where such a strong assertion comes from.

  12. Google’s book – The Datacenter as a Computer

    Abstract

    As computation continues to move into the cloud, the computing platform of interest no longer resembles a pizza box or a refrigerator, but a warehouse full of computers. These new large datacenters are quite different from traditional hosting facilities of earlier times and cannot be viewed simply as a collection of co-located servers. Large portions of the hardware and software resources in these facilities must work in concert to efficiently deliver good levels of Internet service performance, something that can only be achieved by a holistic approach to their design and deployment. In other words, we must treat the datacenter itself as one massive warehouse-scale computer (WSC). We describe the architecture of WSCs, the main factors influencing their design, operation, and cost structure, and the characteristics of their software base. We hope it will be useful to architects and programmers of today's WSCs, as well as those of future many-core platforms which may one day implement the equivalent of today's WSCs on a single board.

    Full text available on-line

    http://dx.doi.org/10.2200/S00193ED1V01Y200905CA

    Steve

  13. Have you all taken a look at offerings like Rackable/SGI's CloudRack? I think it's another progression of what you're talking about here. Also, I specifically know of a start-up that is explicitly disrupting the current impediment to further form factor contraction: the motherboard. They have actually been able to disrupt the cost of the redundant components of the motherboard. I think Moore's law will apply to server component density and power efficiency, and we'll see a lot of very interesting things over the next five years.

  14. Jake – why don't you share this very interesting idea about motherboard designs with the other readers of The Hot Aisle?

    Steve

  15. Very interesting, Steve. In terms of shipping containers, I think about it like putting the computers in the CRAC – putting the heat rejection close to the load for better efficiency, etc.
