I spend a lot of time thinking about data centers and how to make them better: more efficient, more reliable, higher performance, easier to maintain, and cheaper to run. Lots of very smart and experienced people have been doing the same for many years, and whilst there is no single, agreed best-practice model for data center design, there aren't too many options for the mythical perfect 21st-century data center. However, none of these designs are optimal; none of them are even close to being good. In my humble opinion, they are all pretty weak.
So why have so many really smart people singularly failed to design a good data center? It's not because they aren't smart enough, inventive enough, or motivated to do a good job. It's actually because of the badly designed equipment that data center people have to host in their sites. Does that sound like a weak excuse?
Why do I think that the equipment we put into our data centers is badly designed?
In one way, if we look at individual pieces of equipment in isolation, they are actually extremely well designed. High-performance servers are close to optimal as individual servers; it's just that when you put hundreds or thousands of them into a data center they cause problems. Big problems: cooling, energy efficiency, connectivity, maintenance, reliability, and cost. So my point is that the equipment we deploy into a data center is not optimally designed to be deployed at scale.
So here is the big question. If we plan to deploy hundreds or thousands of servers into our data center, why do we package each one separately, with its own power supply units (yes, they all have more than one for resilience), fans, and enclosure? Even blade servers are packaged that way: each blade has its own fans and enclosure.
Data centers are designed to operate with all of these singular systems: designed to be flexible, able to work with the least optimally designed equipment. So data centers have air conditioning units, raised floors, AC power delivery, hot and cold aisles, and massive resilience and redundancy. We chill water and use it to cool air, and then have to deal with humidity levels and condensation. We convert power from AC to DC, then back to AC, and then, crazily, back to DC. A data center psychoanalyst (were there such a person) would diagnose simultaneous and severe schizophrenia and paranoia.
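To see why that AC-to-DC-to-AC-to-DC chain matters, here is a minimal sketch of how the stage losses compound. The per-stage efficiency figures are illustrative assumptions of my own, not measurements from any particular facility:

```python
# Illustrative power chain: utility AC -> UPS rectifier (to DC) ->
# UPS inverter (back to AC) -> server PSU (back to DC).
# Efficiency numbers are assumed for the sake of the example.
stages = [
    ("UPS rectifier (AC -> DC)", 0.96),
    ("UPS inverter (DC -> AC)", 0.96),
    ("Server PSU (AC -> DC)", 0.90),
]

efficiency = 1.0
for name, eff in stages:
    efficiency *= eff  # losses multiply through the chain
    print(f"{name}: stage {eff:.0%}, cumulative {efficiency:.1%}")

# With these assumed figures, roughly 17% of the power drawn from the
# utility is lost before it ever reaches a chip.
```

Each conversion looks harmless on its own; it is the multiplication through the chain that hurts, which is exactly the isolation-versus-system point above.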
If we designed the packaging for our data centers differently, centralizing the supply of DC power, we could cut out the need for UPSes, server-level power supply units, and huge amounts of complexity and cost.
If we centralized cooling and took the responsibility for managing airflow away from the individual server, we might deliver air handling at a rack or container level (perhaps even with local cooling coils), or we might choose to liquid-cool high-power equipment directly, cutting out the inefficiency of cooling via air completely.
So what do you think? Are our data centers and data center equipment designed well? In isolation, perhaps, but as a complete end-to-end system they leave a lot to be desired. We could reduce cost, improve efficiency, and improve reliability, all through some joined-up design.
Google claim to have reached incredible levels of power efficiency in their most modern data centers, getting down to PUE levels of around 1.2. How was this done? Joined-up thinking, and really working to deliver data center designs at scale. If we start from where we are today, we will fail.