
Recently, The Hot Aisle conducted an online survey asking the question:

“How do you currently choose where to place equipment in your data centre?”

As usual, we got a very large response to the survey, with 5,580 respondents.

[Survey results chart]

The results are in, and they show quite clearly that equipment placement is (to say the least) unsophisticated. A substantial number of Enterprises are running at significant risk:

  • Overloading PDUs, creating the potential for cascade failure
  • Lost resilience
  • Poor power capacity utilisation

There is obviously a gap in the market for more sophisticated rack planning systems that actually measure and monitor the critical parameters and give operators the tools to maximize packing density whilst avoiding overload risk.
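
As a rough illustration of the kind of check such a tool would automate, the sketch below tests whether a proposed server fits a dual-fed rack without losing resilience. It is only a sketch: the PDU rating, load figures and function name are assumptions, not a real product.

    # Illustrative sketch only: the PDU rating and load figures are hypothetical.
    # For a dual-fed rack, a placement is only safe if one PDU could carry the
    # whole rack on its own after the other feed fails.

    PDU_CAPACITY_W = 7400  # assumed rating of each rack PDU (roughly 32 A at 230 V)

    def can_place(server_draw_w, feed_a_load_w, feed_b_load_w):
        """True if adding the server leaves the rack survivable on a single feed."""
        total_after_placement = feed_a_load_w + feed_b_load_w + server_draw_w
        return total_after_placement <= PDU_CAPACITY_W

    print(can_place(450, 3800, 3500))  # False: losing a feed would overload the survivor
    print(can_place(450, 1800, 1700))  # True: either feed could carry the whole rack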

  • http://www.adinfa.com Philip

    Steve,
    This is very interesting data you have uncovered. And, yes, I agree that there is a gap in the market (which is why my company is addressing it!).
    Did you notice any geographic variations across your respondents, or is this a pretty consistent picture wherever the companies are located?
    Thanks,
    Philip

  • http://thehotaisle.com Steve O’Donnell

    Philip,

    The Hot Aisle polls are unsophisticated. They prevent the same person from voting more than once but don’t record location. How do you resolve the problems described in the article?

    Steve

  • http://www.adinfa.com Philip

    Steve,

    If I have understood the poll correctly, the picture is that most organisations do not have any form of real-time monitoring of where the power is going, particularly down to rack level, and have only limited visibility of temperature.

    I believe that discussions we are having with data centre managers complement your findings. They tell us that real-time monitoring of power and environmentals down to the rack level is becoming critical to managing:
    - service reliability and quality
    - capacity
    - cost

    Yet they are struggling to find affordable, usable tools to help them do this. Typically, they are taking intermittent, irregular manual readings from a few meters. This tells them little.

    Our InSite software product addresses this by:
    - monitoring any IP-addressable, metered power strips, environmental sensors, main PDU meters, UPSs, etc.
    - dashboarding information onto floorplans, schematics, etc., with real-time feeds to icons
    - providing interactive, real-time reporting
    - taking automated actions, including alerts on threshold breaches.
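
    As a minimal sketch of the polling-and-alerting pattern described above (this is not InSite itself; the transport to the strip, the reading and the threshold are all assumed for illustration):

        # Sketch of the polling-and-alerting pattern only; not a real product's API.
        # The transport to the strip (SNMP, HTTP, Modbus...) is left as a stub,
        # and the threshold is an assumed figure.
        import time

        THRESHOLD_W = 6000  # assumed alert threshold for one metered strip

        def read_power_watts(strip_address):
            """Hypothetical stub: replace with the strip vendor's SNMP/HTTP interface."""
            return 5800  # simulated reading so the sketch runs

        def poll(strip_address, interval_s=60):
            while True:
                watts = read_power_watts(strip_address)
                if watts > THRESHOLD_W:
                    # A real deployment would raise an alert here (email, trap, ticket).
                    print(f"ALERT: {strip_address} drawing {watts} W (threshold {THRESHOLD_W} W)")
                time.sleep(interval_s)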

    If any of your readers are interested in discussing this subject further to see if we can help them, I would be delighted to hear from them.

    Thanks,
    Philip

  • http://thehotaisle.com Steve O’Donnell

    Thanks Philip

    This is a very important issue. For Enterprises there is a conflict between data centre efficiency (fitting it all in) and reliability. Getting the balance right without tooling is actually impossible.

    Organisations that push the boundaries too far risk a catastrophic failure that could take days to unravel (burned-out PDUs and busbars). Organisations that don’t fit everything in won’t maximize their capital efficiency.

    Anyone else seen any good solutions?

    Steve

  • http://www.wiredre.com Everett Thompson

    Excellent input. We’ve seen some decent tools lately, but they’ve been too expensive, so we are still looking. Today, we build custom spreadsheets for our clients.

    Best,

    Everett

  • http://www.atov.com Pete Williams

    Apologies for late comments on some of your blogs Steve, I’m just catching up with your most recent ones.

    I developed and implemented a tool while working under Steve (the client will remain anonymous unless Steve is happy for me to say who, but they were quite a significant company) that provided information to the Space Management Board for planning the rollout of new equipment.

    It was integrated with the client CMDB and a 3rd party visualisation tool.

    It first took a feed of the whole client estate and the physical location of each device, from Site, Floor, Room, Tile and Rack down to the tin (including make and model and, more importantly, power demand and heat output!), and even application and service (I will explain why later).

    We then ran an audit of the M&E infrastructure in key data centres to establish the linkages between transformers, UPSs, PDUs and ACUs, and the total capacity of each.

    An audit was then run to record which PDUs were attached to which tins.

    Can you see where I’m going with this?

    After the pain of initial data collection, we were able to view the ‘theoretical’ power demand on each PDU, UPS, transformer, room, site, cabinet and so on, because we already knew the physical location of every device. That meant we knew the power utilisation of pretty much every M&E device and its available capacity.
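
    As a minimal sketch of that roll-up (the device names, feed relationships and wattages below are invented for illustration):

        # Made-up names and figures. Each tin records its theoretical demand and the
        # PDU that feeds it; each PDU records its UPS, so demand sums up the chain.
        tins = {
            "srv-01": {"demand_w": 420, "pdu": "PDU-A1"},
            "srv-02": {"demand_w": 510, "pdu": "PDU-A1"},
            "srv-03": {"demand_w": 380, "pdu": "PDU-B2"},
        }
        pdu_to_ups = {"PDU-A1": "UPS-A", "PDU-B2": "UPS-B"}

        def demand_per_pdu():
            totals = {}
            for tin in tins.values():
                totals[tin["pdu"]] = totals.get(tin["pdu"], 0) + tin["demand_w"]
            return totals

        def demand_per_ups():
            totals = {}
            for pdu, watts in demand_per_pdu().items():
                ups = pdu_to_ups[pdu]
                totals[ups] = totals.get(ups, 0) + watts
            return totals

        print(demand_per_pdu())  # {'PDU-A1': 930, 'PDU-B2': 380}
        print(demand_per_ups())  # {'UPS-A': 930, 'UPS-B': 380}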

    Not only did we know power availability and capacity, but we could highlight hot and cold spots.

    The tool enabled the client to run all kinds of reports:

    1) Where do we have ‘physical space’?
    2) Where do we have ‘available power’?
    3) Where do we have ‘available cooling capacity’?
    4) Where do we have dangerously high loading on either power or cooling?
    5) Do we have any business-critical applications which fall within high-risk spots?
    6) Which kit is currently single-fed and which is dual-fed?
    7) Do we have any business-critical applications on kit which is only single-fed?

    The list was endless and gave the client and the Space Management Board an invaluable basis on which to make critical business decisions, aid planning and reduce risk.

    The one small flaw with this is that the power demand of the kit was theoretical. We got round this by derating the manufacturer’s stated power consumption to 70%, which is more realistic, but not an exact science.
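
    For illustration, that factor is just a multiplication; the nameplate figure below is assumed:

        # Tiny worked example of the 70% factor, with an assumed nameplate figure.
        nameplate_w = 600                 # manufacturer's stated consumption
        estimated_w = 0.7 * nameplate_w   # derated planning figure
        print(estimated_w)                # 420.0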

    We did move one step further in this process towards the end of my time with the client, when we introduced iPDUs. These gave a real-time reading of power consumption at PDU level and, as long as we knew the relationship between tin and iPDU, we could provide even more accurate reports for the client where the iPDUs had been introduced.

  • http://viridity.com Michael Rowan

    Steve,

    I was watching this poll back in November/December but missed your comments.

    We have seen the same thing in our customers as your survey shows. What’s interesting here is that the power draw and heat in today’s servers are so directly tied to utilization, which is low in most places but variable over time; some servers can more than double their power consumption as a result of increased utilization! As a result, many IT managers are forced to place equipment by exaggerated specifications from vendors, by instantaneous or average current draw (via rack PDUs or rack sensors), or by current temperature.

    The problem is that these are all just instantaneous readings and measurements; they don’t account for the combination of what the average draw of a given piece of equipment is AND what its potential draw is. So the rack that seems a neat fit for a piece of equipment today can be a disaster tomorrow.
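
    As a minimal sketch of a placement test that weighs potential draw as well as the comfortable-looking average (the circuit capacity and load figures are assumed):

        # Assumed circuit capacity and load figures, purely for illustration.
        CIRCUIT_CAPACITY_W = 5000

        def placement_report(circuit_avg_w, circuit_peak_w, new_avg_w, new_peak_w):
            """Compare the comfortable-looking average case with the worst case."""
            avg_ok = circuit_avg_w + new_avg_w <= CIRCUIT_CAPACITY_W
            peak_ok = circuit_peak_w + new_peak_w <= CIRCUIT_CAPACITY_W
            return avg_ok, peak_ok

        # Fine on today's average reading, but a utilisation spike would overload the circuit:
        print(placement_report(2600, 4300, 300, 900))  # (True, False)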

    We are coming at this from both sides: using top-down power monitoring of the actual IT equipment (servers, virtual servers, storage arrays, networks) and anchoring that monitoring with data from the physical infrastructure when available (rack PDU, BCM, etc). This gives us the ability to see over the long term how equipment is drawing power (and what its actual capabilities are, based on configuration), as well as how applications are responsible for that power draw (which gets really interesting both in organizations that want to do real charge-back and in those starting to embrace application mobility).

    There are a number of ways to skin the cat, but placement is a sophisticated task that requires consideration of more than just the vendor faceplate, instant PDU readings, or how hot a rack seems. (Of course, I’ve been in a lot of data centers that just hunt for an open slot, so … ;-)

    Thanks for the data, Steve. Awesome stuff.