The Hot Aisle
Fresh Thinking on IT Operations for 100,000 Industry Executives

I spend a lot of time thinking about data centers and how to make them better: more efficient, more reliable, higher performing, easier to maintain and cheaper to run. Lots of very smart and experienced people have been doing the same for many years, and whilst there is no single, agreed best-practice model for data center design, there aren't too many options for the mythical perfect 21st Century Data Center. Yet none of these designs are optimal, none of them are even close to being good; in my humble opinion they are all pretty weak.

So why have so many really smart people singularly failed to design a good data center? Is it because they are not smart enough, inventive enough or motivated to do a good job? No, it's actually because of the badly designed equipment that data center people have to host in their sites. Does that sound like a weak excuse?

Why do I think that the equipment we put into our data centers is badly designed?

In one way, if we look at individual pieces of equipment in isolation, they are actually extremely well designed. High-performance servers are close to optimal as individual servers; it's just that when you put hundreds or thousands of them into a data center they cause problems. Big problems: cooling, energy efficiency, connectivity, maintenance, reliability, and cost. My point is that the equipment we deploy into a data center is not designed to be put into a data center at scale.

So here is the big question. If we plan to deploy hundreds or thousands of servers into our data center, why do we package each one separately, with its own power supply units (yes, they all have more than one for resilience), fans and enclosure? Even blade servers are packaged that way: each blade has its own fans and enclosure.
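
To put some rough numbers on that duplication (the server count, fan count and per-PSU overhead below are assumptions for illustration, not measurements):

```python
# Rough count of duplicated components when every server is packaged
# individually. All figures are illustrative assumptions, not data
# from the post.

servers = 1000
psus_per_server = 2        # dual PSUs for resilience
fans_per_server = 6        # assumed fan count for a typical 1U chassis
psu_idle_overhead_w = 15   # assumed per-PSU conversion/idle loss, in watts

total_psus = servers * psus_per_server
total_fans = servers * fans_per_server
wasted_power_kw = total_psus * psu_idle_overhead_w / 1000

print(f"{total_psus} power supplies and {total_fans} fans for {servers} servers")
print(f"~{wasted_power_kw:.0f} kW lost to per-server PSU overhead alone")
```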

Data centers are designed to operate with all of these singular systems: designed to be flexible and able to work with the least optimally designed equipment. So data centers have air conditioning units, raised floors, AC power delivery, hot and cold aisles, and massive resilience and redundancy. We chill water and use it to cool air, then have to deal with humidity levels and condensation. We convert power from AC to DC, then back to AC, and then, crazily, back to DC. A data center psychoanalyst (were there such a person) would diagnose simultaneous and severe schizophrenia and paranoia.

If we designed the packaging for our data centers differently, centralizing the supply of DC power, we could cut out the need for UPSs, server-level power supply units and huge amounts of complexity and cost.
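
As a back-of-the-envelope sketch of what that AC-to-DC-to-AC-to-DC chain costs (the stage efficiencies below are assumptions for illustration, not measured figures):

```python
# Compounded efficiency of the conversion chain the post describes:
# a double-conversion UPS followed by the server's own PSU. The stage
# efficiencies are assumptions for illustration only.

from functools import reduce

conversion_chain = {
    "UPS rectifier (AC->DC)": 0.95,
    "UPS inverter (DC->AC)": 0.95,
    "Server PSU (AC->DC)": 0.90,
}

overall = reduce(lambda acc, eff: acc * eff, conversion_chain.values(), 1.0)
print(f"Overall conversion efficiency: {overall:.1%}")   # ~81%
print(f"Power lost before it reaches the silicon: {1 - overall:.1%}")

# A single centralized rectification stage at, say, 95% efficiency
# would cut those losses roughly in half.
print(f"Single-stage alternative: {0.95:.0%} efficient")
```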

If we centralized cooling better and took the responsibility for managing airflow away from the individual server, we might deliver air handling at the rack or container level (perhaps even with local cooling coils), or we might choose to liquid-cool high-power equipment directly, cutting out the inefficiency of cooling via air completely.
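
A rough comparison of how much heat the same volumetric flow of air and of water can carry away, using standard textbook properties (the flow rate and temperature rise are assumed for illustration):

```python
# Heat carried by a coolant stream: Q = rho * V_dot * c_p * delta_T.
# Densities and specific heats are standard values at ~20 C; the flow
# rate and temperature rise are illustrative assumptions.

def heat_removed_kw(density_kg_m3, specific_heat_j_kgk, flow_m3_s, delta_t_k):
    return density_kg_m3 * flow_m3_s * specific_heat_j_kgk * delta_t_k / 1000

flow = 0.01     # 10 litres per second of coolant (assumed)
delta_t = 10    # 10 K temperature rise across the equipment (assumed)

air_kw = heat_removed_kw(1.2, 1005, flow, delta_t)
water_kw = heat_removed_kw(998, 4186, flow, delta_t)

print(f"Air:   {air_kw:.2f} kW")
print(f"Water: {water_kw:.0f} kW")
print(f"Water carries ~{water_kw / air_kw:.0f}x more heat per unit of flow")
```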

So what do you think? Are our data centers and data center equipment well designed? In isolation, perhaps, but as a complete end-to-end system they leave a lot to be desired. We could reduce cost, improve efficiency and improve reliability, all by doing some joined-up design.

Google claim to have reached incredible levels of power efficiency in their most modern data centers, getting down to PUE levels of 1.2 or so. How was this done? Joined-up thinking and really working to deliver data center designs at scale. If we start from where we are today, we will fail.
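
For reference, PUE is simply total facility power divided by IT equipment power, so 1.2 means only 20% overhead on top of the IT load. A quick sketch (the absolute power figures are made up; the 2.0 baseline is a commonly cited industry figure, not from this post):

```python
# PUE (Power Usage Effectiveness) = total facility power / IT power.
# All power figures below are illustrative assumptions.

def pue(it_kw, cooling_kw, power_distribution_kw, other_kw=0.0):
    total = it_kw + cooling_kw + power_distribution_kw + other_kw
    return total / it_kw

typical = pue(it_kw=1000, cooling_kw=700, power_distribution_kw=250, other_kw=50)
optimised = pue(it_kw=1000, cooling_kw=150, power_distribution_kw=50)

print(f"Typical facility PUE:   {typical:.1f}")    # 2.0
print(f"Optimised facility PUE: {optimised:.1f}")  # 1.2
```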

Responses

  1. Steve,

    I think data centre design basically fails because each discipline does not know what the other disciplines are doing, and each goes its merry way regardless. For instance, we are seeing ever greater equipment densities which need an ever greater amount of cooling. Why? I think the economics have changed so that the cost of a data centre in £ per m² is no longer the important cost; it's the running costs. So buy a big shed-type building and space the kit out so it's easier and more economical to cool. Why try to put more and more kit into a small space? It's not going to make things more efficient.

    Reading the EU Code of Conduct on data centre efficiency, in particular the bit that says all new IT should be ETSI compliant by 2012, I wonder if anyone in a position to effect change is looking to the future, and if so, why are people still building data centre dinosaurs!

    Cheers

    Steve C

  2. Don't get me started on the airflow in a lot of switching and routing equipment. With servers at least we work with cold and hot aisles, which is kinda efficient, pulling cold air in at the front and blowing it out at the back. Who ever came up with side-to-side airflow? And why are some switches left-to-right and some right-to-left? Aargh. Do the designers have any idea that their equipment is 19″ wide because it will be put in a cabinet?
