
I am old enough to remember the 1960s, when IBM mainframes used deionized water delivered by micro-bore pipes to cool the CPUs. (In fact I remember a spillage during a mainframe move that resulted in every single auto spares shop in south-east England being raided for deionized water.)

In a recent statement IBM claim that direct water cooling could also be the cooling technology of the future. IBM say that removing excess heat from data centers is as much as 4000 times more efficient via water than via air. This revitalized cooling technology was reintroduced in 2005, when IBM launched the Cool Blue water-cooled products.

There are two basic approaches that get cold water closer to the case or heat-sinks of the hottest components (generally the CPUs). In the first, the IBM technology uses cold plates physically bolted to the CPUs, with centrally delivered chilled water channeled through them. This removes heat efficiently but is mainly targeted at getting the case temperature down so that the Power 6 chips can be over-clocked reliably, delivering more performance from the same basic silicon.

The second approach delivers cold water to coils in a rear door (Emerson deliver a similar product). IBM’s Rear Door Heat eXchanger is 4 inches thick and weighs in at a hefty 70 lbs. IBM claim that the door can absorb as much as 50% of the heat coming from the server rack.

The novice would say that water and electronics don’t mix well, but experienced data center managers know that water is the main medium used in Computer Room Air Conditioner (CRAC) units to cool the raised floor area. Bringing water closer to the CPU core improves the efficiency of heat transfer by as much as 4000 times. So, in summary, if we get the liquid right up against the CPUs, we need to pump roughly 4000 times less coolant by volume than air to get the same value of cooling. IBM’s Cool Blue doors are significantly more efficient than the CRAC units in the same room because they are closer to the source of heat, but crucially they have a negative effect on the CRAC units they are designed to supplement.
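
The “4000 times” figure is roughly what you get by comparing the volumetric heat capacity of water with that of air. A minimal back-of-envelope sketch in Python, using approximate room-temperature property values (my numbers, not IBM’s):

    # Back-of-envelope check on the "4000x" claim: compare how much heat a
    # cubic metre of water can carry per degree of temperature rise versus a
    # cubic metre of air. Property values are approximate and illustrative.

    water_density = 997.0      # kg/m^3 (room temperature)
    water_cp      = 4186.0     # J/(kg*K)
    air_density   = 1.18       # kg/m^3 (room temperature, sea level)
    air_cp        = 1005.0     # J/(kg*K)

    water_per_m3_per_K = water_density * water_cp
    air_per_m3_per_K   = air_density * air_cp

    print(f"Water/air volumetric heat capacity ratio: "
          f"{water_per_m3_per_K / air_per_m3_per_K:,.0f}x")
    # Prints roughly 3,500x -- the same order of magnitude as IBM's figure.

In other words, for a given temperature rise, water carries thousands of times more heat per unit volume than air, which is where the pumping and fan energy savings come from.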

CRAC units operate most efficiently when the return airflow is as hot as possible; Cool Blue doors cool that hot airflow and therefore reduce the efficiency of the rest of the room. It is a shame the doors were not fitted to the front of the cabinet, where they could ensure uniformly cold air is drawn into the front of each server.
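
To make that concrete, here is a minimal sketch of why a cooler return airstream hurts the CRAC: the sensible heat a coil removes scales with airflow and the return-to-supply temperature difference. The airflow and temperature figures below are assumptions for illustration, not vendor data.

    # Sensible cooling done by one CRAC: Q = m_dot * cp * (T_return - T_supply).
    # All numbers below are assumed, purely to show the trend.

    air_cp     = 1005.0   # J/(kg*K)
    mass_flow  = 5.0      # kg/s of air through the CRAC (assumed)
    t_supply_c = 18.0     # supply air temperature off the coil (assumed)

    def crac_sensible_kw(t_return_c: float) -> float:
        """Heat removed from the airstream for a given return temperature."""
        return mass_flow * air_cp * (t_return_c - t_supply_c) / 1000.0

    print(crac_sensible_kw(35.0))  # hot return air             -> ~85 kW removed
    print(crac_sensible_kw(24.0))  # return pre-cooled by doors -> ~30 kW removed
    # Same fan energy either way, but far less useful work per CRAC when the
    # rear doors have already cooled the exhaust.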

“We’ll see a return to liquid-cooling in most IT solutions,” said Doug Neilson, a consultant in IBM’s systems and technology group, at the IDC Enterprise Data Centre conference in London. The technique, once used in specialised supercomputers and large mainframes, could now have more general use as companies try to become more energy efficient, he said.

“Water-cooling is a better way to recycle the heat energy,” said Neilson. “If you cool by air it is much harder to capture and condense it.” To harness that energy, it must be transferred to liquid, so it is more efficient to use liquid cooling directly, he said.

I agree with IBM and am certain that liquid cooling has such a significant advantage over air cooling that the economics and density benefits will be overwhelming. Handling water in a data center is business as usual and we must not be frightened of it. The equipment vendors need to get more joined up with the data center M&E teams and start looking at delivering cooling at the chip level for all high-performance, high-density systems.

The advantages could be overwhelming:

  • No refrigeration needed: even in the hottest climates, free air cooling could be used, because input liquid temperatures as high as 122 degrees F (50 degrees C) would be acceptable
  • Waste heat could be delivered at useful temperatures, around 149 degrees F (65 degrees C)
  • Pump energy could be minimized, enabling PUE levels of 1.05 or better (see the sketch after this list)
  • Data Centers could be silent as there would be no need for fans
  • New equipment could be thermally neutral, not adding any extra heat load to the site
  • No humidity problem, no humidifiers
  • No need for a raised floor
  • Massive improvement in reliability due to thermal stability
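
As a rough illustration of the PUE point in the list above: PUE is simply total facility power divided by IT power, so shrinking the cooling overhead to little more than pump energy is what gets you into 1.05 territory. The load and overhead figures below are assumed, not measured:

    # PUE = total facility power / IT equipment power.
    # Illustrative, assumed numbers only.

    it_load_kw = 1000.0   # IT equipment load (assumed)

    scenarios = {
        "air-cooled (chillers, CRAC fans, humidifiers)": 600.0,  # assumed overhead
        "direct liquid, free-cooled (pumps and losses)":  50.0,  # assumed overhead
    }

    for name, overhead_kw in scenarios.items():
        pue = (it_load_kw + overhead_kw) / it_load_kw
        print(f"{name}: PUE = {pue:.2f}")
    # -> roughly 1.60 for the legacy air-cooled case, 1.05 for the liquid case.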

Liquid cooling fits the joined-up thinking that The Hot Aisle promotes. Well done, IBM.

There Are 18 Responses So Far.

  1. @stephenodonnell RT http://bit.ly/fh3Bb the future of data center cooling…..H2O!!!! Crew cuts and hot rods next…

  2. What is the temperature of the supply chilled water used to cool the servers?
    If the supply is 55 degrees with water, there is no difference from 55-degree air, because the servers' internal fans run at a constant CFM for any type of cooling system.
    All the IBM system is doing is capturing the waste heat; the cooling has already occurred, so the only thing left to do is remove the heat to the exterior.
    If heat removal is through a refrigerant cooling cycle then there is no reduction in tonnage.
    Reduction of tonnage is the only metric that will be meaningful in a green data center.
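
    For readers outside the HVAC world, “tonnage” is refrigeration capacity; one ton of refrigeration is about 3.517 kW of heat removal, so for a fixed heat load the tonnage is fixed no matter how the heat reaches the chiller. A minimal sketch, with an assumed load figure:

        # One ton of refrigeration = 12,000 BTU/h ~= 3.517 kW of heat removal.
        # The IT heat load below is an assumed figure for illustration.

        KW_PER_TON = 3.517

        it_heat_load_kw = 500.0                 # assumed heat load to reject
        tons_required = it_heat_load_kw / KW_PER_TON
        print(f"{tons_required:.0f} tons of refrigeration")  # ~142 tons

        # Whether that heat reaches the chiller via air or via water, the load
        # (and hence the tonnage) is the same; only warmer water temperatures,
        # economizers or free cooling actually reduce compressor work.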

  3. Hi aelarsen

    IBM's cooled back doors are really just a way to add additional CRAC capacity to an existing data center with poor airflow and a need to install high density equipment. You are entirely correct that they are not particularly green or efficient and I commented in the article about the negative impact of delivering cooled exhaust air back to the main room CRAC units.

IBM's cold plate technology uses chilled water (sub 20 degrees C) that must be maintained within very tight tolerances to keep the CPU cases cold enough to over-clock. There is also a reduction in energy costs in this instance, as pumping water is much more efficient per unit of heat moved than pumping air.

    My comment about being able to use free air cooled water (at up to 50 degrees C) refers to a possible future technology where we are not trying to over-clock the CPUs and are more interested in cooling very high power components efficiently. Tcase for most CPUs is in the range of 60 degrees C so using 50 degree C water would be possible if the thermal impedance between the case and water could be reduced sufficiently. Then we are talking about HUGE energy savings.
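
    As a rough sketch of that arithmetic: the case temperature is approximately the coolant temperature plus the chip power times the case-to-coolant thermal resistance, so the warmer the water, the lower that resistance has to be. The chip power and the resulting resistance figure below are assumptions for illustration only.

        # T_case ~= T_coolant + P_chip * R_case_to_coolant (simple steady-state model).
        # Assumed figures, purely to illustrate the head-room with 50 C water.

        t_case_limit_c = 60.0    # typical Tcase limit mentioned above
        t_coolant_c    = 50.0    # free-air-cooled water temperature
        p_chip_w       = 100.0   # assumed CPU power dissipation

        # Maximum allowable case-to-coolant thermal resistance:
        r_max = (t_case_limit_c - t_coolant_c) / p_chip_w
        print(f"Required case-to-coolant thermal resistance <= {r_max:.2f} K/W")
        # -> 0.10 K/W for these numbers; achievable only with a very good cold
        #    plate and thermal interface, which is exactly the engineering challenge.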

    Steve

  4. All the above is true: liquid cooling can be the most efficient if designed right. There are several important business issues keeping liquid cooling out of the mainstream, and these won't go away soon. For newly designed sites that are planning on 15-30 kW racks, liquid is the technology of choice, but for most others the following still very much applies.

    1. Redundancy is much more problematic and expensive with liquid than with air. An extra CRAC or two in the data center can be used as a backup for a fairly wide expanse of racks. How do you get cooling redundancy for a 20 kW liquid-cooled rack? Not impossible, but very expensive.

    2. Cost of equipment is significantly higher, at least today. Rather than a single CRAC servicing 20-30 racks, the liquid solutions are almost always one per rack. Even with densification of the racks, the costs of liquid cooling equipment inside the data center are many times those of air. The costs outside the data center are nearly equal. Of course with wider adoption and standardization the costs will come down – but when? With bleeding-edge IT technology changing so fast, it's a brave soul who can bet on a new technology to be around or even supported in 5-10 years; add to this the fact that it's much more expensive, and it's no wonder liquid is not yet mainstream.

    3. How many data centers really have a “SPACE” problem? With the 6-10 kW racks of the typical data center replacing the traditional 1-3 kW racks, very few sites run out of space before they are up against serious power and plant cooling constraints. 6-10 kW is easily handled with traditional cooling solutions and proper airflow design.

    Comments?

    Wally Phelps
    http://www.adaptivcool.com

  5. Well, it so happens that there is a company in Wisconsin that is able to cool a 150 kW load while maintaining a cold aisle inlet temperature of less than 90 degrees, using 65-degree water, with up to 450U of rack space in less than 64 sq ft of floor space and 9 feet of height.

  6. The key in cooling is heat removal. Regardless of the system chosen, the return of heat to the cooling cycle increases the delta T across the cooling coil. The higher the delta T, the more tonnage required.
    I have a patent-pending system that provides cooling, containment and segregation of heat, all with air curtains. The waste heat is dissipated through heat-stratification housings before returning to the cooling cycle. This results in a lower delta T, and that means less tonnage.
    Cooling is heat removal, plain and simple.

  7. James

    Sounds interesting – tell us more.

    Steve

  8. Here is a company in the Bay Area that I’m a big “fan” of – http://www.clusteredsystems.com/
    Their solution is a hybrid of the two approaches mentioned in your article.

  9. I'm a big “fan” of Clustered Systems. Their solution is a hybrid of the two approaches you mentioned in your post.

  10. Visit my blog at the Data Center Journal, “green data center cooling”.
    Take a look.

  11. Is Clustered Systems' technology different from spray cooling, i.e. heat removal from the chip itself?

    Fans take an inordinate amount of power, both in the servers and in the cooling units. If there is a cheap, reliable and redundant way to extract heat right out of the chip, the fans can be eliminated. So far this has been elusive, but at some point rack densities will increase to where it makes financial sense. It's anybody's guess where this point is – possibly the 60-100 kW range?

  12. kW/ton for chillers is 6 to 7 times more than for fans.
    If you want to save money, concentrate on reducing tonnage.

  13. I'd like to compare data on fan power vs cooling power. You can contact me at wally.phelps@degreec.com

  14. Clustered Systems take the approach of inserting cold plates between 1RU servers and building up the CPU heat-sinks on the motherboards so that the case (which is cooled by the cold plate) removes the heat. This is similar in concept to the IBM technology that attaches cold plates directly to the CPU core. Clustered Systems also disable the server fans.

    Steve

