Fresh Thinking on IT Operations for 100,000 Industry Executives

We are all trying to fix the unfixable in the Data Centre. How do we get more cooling capacity from sites we built a decade ago for low power density applications?

One of the biggest problems is that everyone is trying to work against a basic law of physics: hot air rises, and cold air falls. Here's the problem: a conventional data centre is designed to support servers that pull air in at the front and blow it out at the back of the cabinet (except for the odd aberration from Cisco that blows the air side to side). So common sense tells us that computers at the top of the cabinet don't get as much cold air as computers at the bottom.

Front to Back

Emerson Network Power supplies cold aisle containment solutions, IBM offers water-cooled doors on the back of its racks (what a dumb idea that is), and data centre operators lay out their sites in hot and cold aisles. It is all about getting cold air onto hotter and hotter processors in higher and higher density packages.

Keeping the hot air and cold air separate is important because Computer Room Air Conditioning (CRAC) units run at peak efficiency only when the return airflow is as hot as possible and high volumes of this hot air pass over the cooling coils. Maintaining hot and cold air in separate vertical aisles between racks is hard to achieve and requires a great deal of attention to detail.
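
As a rough back-of-envelope illustration of why a hot return stream matters (the airflow and temperatures below are illustrative assumptions, not manufacturer figures, sketched in Python):

# Sensible cooling a CRAC coil can pick up from the air stream at a fixed airflow.
# Air density and specific heat are assumed values for dry air near sea level.
RHO = 1.2      # kg/m^3
CP = 1005.0    # J/(kg*K)

def sensible_cooling_kw(airflow_m3_per_s, supply_c, return_c):
    # Q = rho * V * cp * (T_return - T_supply)
    return RHO * airflow_m3_per_s * CP * (return_c - supply_c) / 1000.0

airflow = 5.0  # m^3/s through the coil (illustrative)
print(sensible_cooling_kw(airflow, supply_c=18, return_c=24))   # ~36 kW when hot and cold air mix before the return
print(sensible_cooling_kw(airflow, supply_c=18, return_c=35))   # ~102 kW when the return stays hot

Same coil, same fans, nearly three times the capacity just because the return air was not diluted with cold bypass air.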

Computer Room Air Conditioner

So why not turn the whole thing through 90 degrees and maintain horizontal separation between hot and cold air, with the hot on top and the cold on the bottom? This aligns with the laws of physics and does not require any special procedures like cold aisle containment or the maintenance of blanking panels.

Verari get it: their patented Vertical Cooling Technology draws air in at the bottom of the cabinet and exhausts it through the top. Using a series of large and efficient cabinet-level (not server-level) fans, cold air is pumped from the plenum through the cabinet at high velocity and exhausted at high temperature out of the top.

There is no opportunity for cold and hot air to mix. CRAC units are then able to operate at peak efficiency providing maximum capacity to the data centre.

Vertical Cooling

Turning the hot and cold aisles into a horizontal configuration has a number of significant efficiency benefits:

  • Better CRAC unit efficiency
  • Lower air-handling energy (larger, more efficient fan units; see the sketch after this list)
  • Significantly better Power Usage Effectiveness (PUE)
  • More cores per kW of electricity
  • The front and back of the servers (blades) are unencumbered by the need for cooling fans and slots, freeing space for connectors and indicators
  • There is no front or back, so servers (blades) can be fitted in from both sides, doubling the number of cores per U of rack space.
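
On the air-handling point, here is a rough sketch of why a few large cabinet fans can beat many small server fans doing the same job. The pressure drop and fan efficiencies are illustrative assumptions, not measured figures:

# Fan power to push the same airflow through roughly the same resistance:
# P = airflow * pressure_rise / fan_efficiency. All numbers below are assumptions.
RHO = 1.2      # kg/m^3
CP = 1005.0    # J/(kg*K)

def fan_power_w(airflow_m3_per_s, pressure_pa, efficiency):
    return airflow_m3_per_s * pressure_pa / efficiency

airflow = 30000.0 / (RHO * CP * 15.0)   # ~1.66 m^3/s to carry a 30 kW rack at a 15 C rise
dp = 250.0                              # Pa of assumed flow resistance through the cabinet

print(round(fan_power_w(airflow, dp, 0.15)))   # ~2,760 W for many small 1U fans (assumed ~15% efficient)
print(round(fan_power_w(airflow, dp, 0.55)))   # ~750 W for a few large cabinet fans (assumed ~55% efficient)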

There Are 14 Responses So Far.

  1. This is not a good idea. Been there, done that.
    * the airflow inside a rack is very limited. The rack dimensions are 60×100, but this is mostly blocked by the servers and cables, which leaves very limited space for airflow. Cooling capacity = temperature difference × airflow. In the cold/warm aisle layout, you have a complete door for airflow.
    * there is no separation between cold and warm air in the racks. This makes this cooling inefficient. Warm exhaust from the servers will be drawn into the servers above them.
    * the servers at the top of the racks will run very warm

    I think the best solution is the cold corridor, with blanking plates. And whip all those switch and router people, who should get some basic lessons in airflow. Yes, that is you, Cisco, Foundry, HP.

    Check for example:
    http://www.minkels.com/getfile.php?id=409

  2. Thanks for an interesting reply that I **almost** agree with. If the servers are racked in the normal way – horizontally – and we try to blow air vertically – bottom to top – then the airflow is rubbish and what you say is 100% correct. However, Verari mount their servers vertically on special frames that allow huge airflow, and although the servers in the top level are warmer than those in the bottom, they are well within specifications.

    Steve

  3. You're not right, Sjoerd! It depends on the air pressure, the temperature, nearly air-proof racks (meaning only two openings, top and bottom), the position of the servers, the airflow, …

    *quote* "the airflow inside a rack is very limited."
    It depends, Sjoerd, it depends – check the airflow.
    Long experience with 60×100 racks, always nice results.

    turbobug
    GET IN, SIT DOWN, SHUT UP AND HOLD ON :-)

  4. Will they still use standard servers? Standard servers pull cold air in at the front and blow it out at the back. Exactly how much room is there in front of the servers? Servers are getting longer, and with standard servers there simply is not a lot of room left in the cabinet. If they use blade servers, the situation will get worse, I think. And if they are using their own servers, it is not really a standard all-purpose solution, I think.

    I'd love to discuss this with you in front of a whiteboard. As you can guess, I disagree with you, and I don't see this way of cooling doing 10 kVA in a rack where the cold-corridor solution can do this without much trouble.

  5. This is an interesting idea. A few questions immediately spring to mind:

    1) If one server breathes the air from another, the air supplied to the first will need to be colder to maintain adequate conditions. ASHRAE "recommended" levels have lower recommendations as well as upper ones, and running the data centre extra cold may be less efficient too, depending on the cooling system. Furthermore, the owner/operator is going to have to accept higher inlet conditions for some of his/her equipment… if they accept this, wouldn't a traditional data centre operated at this higher temperature offer similar or greater savings? It'd be good to see some figures for these temperature rises.

    2) How is redundancy / resilience etc. catered for in respect of the fans?

    3) If a server in the chain fails, or is undergoing maintenance, how is air supply provided to the remaining servers in the chain?

    It'd be good to see some diagrams of the inside of these cabinets to get a better idea of how they work.

    Regards,

    Stuart Hall
    Data Centre Simulation Specialist
    Arup

  6. Hi Stuart,

    Good questions. Here is the logic of how it works. The bottom 18 inches of the rack is open – either at the front only (which caters for a legacy hot and cold aisle layout) or at both front and back in a Verari-only solution – or alternatively there is an open tile under the cabinet. The rest of the cabinet is closed except for the top, where the hot air comes out.

    The rack is organized into three layers, with two shelves on each layer – one facing the front and one facing the back (very high density). Above and below each layer there is a dedicated fan tray, so for three layers there are four fan trays:

    TRAY
    LAYER
    TRAY
    LAYER
    TRAY
    LAYER
    TRAY

    The servers are configured with the conventional "front" facing downwards towards the fan tray, in a blade-style organization. Even with all of these fans, they use significantly less air-moving power than having a fan in each blade or server (this is because they are bigger and more efficient). The benefit of this arrangement is that the air moves at a hell of a rate. In fact, the rack pulls air from the underfloor plenum, assisting the fans in the CRAC units.

    So, by moving the air faster, densities of up to 34 kW per rack can be achieved. As I am sure you know, the rate of heat removal is proportional to both the difference between inlet and outlet temperature and the volume of air moved.
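
    To put a rough number on that relationship (the air properties and temperature rises below are assumptions on my part, not Verari's figures), here is a small Python sketch:

    # Rough check: how much air it takes to carry 34 kW out of a rack for a given
    # inlet-to-outlet temperature rise. Air properties are assumed, not measured.
    RHO = 1.2      # kg/m^3
    CP = 1005.0    # J/(kg*K)

    def cfm_for_load(load_kw, delta_t_c):
        m3_per_s = load_kw * 1000.0 / (RHO * CP * delta_t_c)   # volume flow needed
        return m3_per_s * 2118.88                               # convert m^3/s to cubic feet per minute

    print(round(cfm_for_load(34, 10)))   # ~5,970 CFM if the air warms by 10 degrees C
    print(round(cfm_for_load(34, 20)))   # ~2,990 CFM if it warms by 20 degrees C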

    There have been a ton of questions and a lot of interest in this article, so I plan to do the job properly and create a proper diagram and explanation.

    Regards

    Steve

  7. Something Stuart typed and your schematic caught my attention.

    Suppose cold inlet = 18 degrees
    Suppose warm outlet = 27 degrees

    top fan tray 27 degrees
    layer
    fan tray 24 degrees
    layer
    fan tray 21 degrees
    layer
    bottom fan tray 18 degrees

    Now your delta-T over the servers is only 3 degrees. You need to move a lot of air through them to keep them cool. Also, I think special servers are kind of cheating, by the way, but that is a different issue. A general-purpose solution which accepts servers of different types and brands is nicer.
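
    To put rough numbers on that chain (the per-layer load, total airflow, and air properties below are purely illustrative assumptions, sketched in Python):

    # Rough model of the temperature chain in the schematic above: each layer heats
    # the air that the layer above it then has to breathe.
    RHO = 1.2      # kg/m^3
    CP = 1005.0    # J/(kg*K)

    airflow = 3.1                      # m^3/s pushed bottom-to-top through the rack (assumed)
    loads_kw = [11.3, 11.3, 11.3]      # per-layer load for a ~34 kW rack (assumed even split)

    t = 18.0                           # supply temperature at the bottom, degrees C
    for i, load_kw in enumerate(loads_kw, start=1):
        inlet = t
        t += load_kw * 1000.0 / (RHO * airflow * CP)   # temperature rise across this layer
        print(f"layer {i}: inlet {inlet:.1f} C -> outlet {t:.1f} C")
    # Prints roughly 18 -> 21 -> 24 -> 27 C: holding each layer to a ~3 C rise
    # takes on the order of 11,000 m^3/h of air through one cabinet.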

    The inlet of the top layer is 24 degrees. If your servers accept 24 degrees, I'd go with the cold-corridor concept and have a very green data centre. With 24 degrees, you will have a lot more days on which you can use free cooling.

    cold aisle 24 degrees -> server -> warm aisle 33 degrees

    regards,
    Sjoerd

  8. Maybe we have the same opinion; we'll see.
    Room temperature between +64°F and approx. +72°F,
    19″ rack, open at top and bottom, 4×20 inch openings – two at the top and two at the bottom.
    One air stream pulls cold air to the front side of the servers,
    the other air stream pulls air at the back side (like a cold-air corridor).
    Every server pulls air in at the front side and blows it out at the back side;
    if you give "them" enough cold air at the front side then you don't have to worry about the back side.
    The rack fan will do its job, and the server fans too.
    I've never seen a rack with 8 or 12 servers where the highest server failed because of temperature problems.
    The servers can never produce more warm air than the cold air you can pull into the rack – assuming your air con works fine.

    Looks like you don't believe that an air-cooled Porsche could run in Arizona – it can.

    Warm regards
    Henry (turbobug)
    -get in-sit down-shut up & hold on :-)

  9. Sjoerd,

    You got it – Verari cheat by turning the hot and cold aisles through 90 degrees and standing the servers on their heads, blowing upwards. That is the whole point of my article – and why I think it is so worthy of note. Actually, I really think the basic data centre design that pretty much everyone adopts is ill-conceived. If we started with a clean sheet of paper, we would never end up with the data centres we have today.

    I spoke to Dave Driggers, Verari's CTO, earlier today, and he has offered more detail on the airflow, like cubic-feet-per-minute calculations and inlet versus outlet temperatures.

    Thanks for your contributions

    Steve

  10. Henry,

    Try putting 30 kW in an air-cooled rack and watch for the smoke :-)

    Straight front-to-back cooling just won't support that density without something significant like cold aisle containment or vertical cooling as per Verari.

    I am sure Stuart from Arup has the CFD calculations to prove the point.

    Thanks for your contribution.

    Steve

  11. Sjoerd,

    I agree strongly with your comment about folks who run cooling side to side rather than front to back (or bottom to top in special cabinets). I wonder if anyone from Cisco would like to comment?

    I wrote an article some time ago with some advice on how to house high capacity Cisco switches:

    http://www.thehotaisle.com/2008/07/07/how-to-no

    Steve

  12. Steve and Sjoerd,

    I guess we all think nearly the same, but in different ways.
    To recapitulate and oversimplify:
    - an open tile, a hermetically closed rack with only the top and bottom open,
    - 4 big fans, blowing, radial (500-5,000 m³ of air per hour), two vertical cold aisles – very effective.

    The Verari pic shows it best, I think (vertical cooling – great thing), very forward-looking, much better than left-to-right cooling only:
    http://www.thehotaisle.com/wp-content/crac.png

    For a big Sun M8000-series machine we use an additional 800 m³/h radial fan system,
    and for a 30 kW rack I would take a 2,000 m³/h four-fan system.
    It's noisy, but we've got cute neighbours :-)

    Thanks for your attention

    Henry (turbobug)
    -get in-sit down-shut up & hold on :-)

    Steve (thehotaisle) – thanks for this informative discussion board.

  13. This option makes sense when you are starting from scratch and can buy equipment from one vendor that is willing to design all of its equipment to the same convention. But reality dictates that most data centers have been operating for a few years, contain a wide variety of equipment, and are under the control of people who don't care about supply/return separation. In these cases, simulation is a good means of taking back as much control as possible. The simulation models can show the root causes of hot and cold air mixing and help communicate the problem, and the benefits of investing in fixes, to other people.

  14. [...] Read the entire blog entry here >> [...]
