
Interview with Haroon Malik – DCS Magazine


Malik – Do you think that, as yet, many businesses have a sufficient understanding of the importance of the human aspect of IT within the data centre, i.e. writing policies, managing infrastructure, making mistakes, etc.?

O’Donnell – It may be a bit unkind to some of my colleagues, but my experience shows that many IT organisations are run by unreformed techies who believe their job is to roll out new products and functionality successfully. In my experience this is only a tiny piece of the picture. There is the obvious people, process and technology mantra that many repeat but few deliver on, but even that misses the point. Technology for technology’s sake destroys business value; it dissipates cash, management time and human resources.

IT and other technologies are only valuable where they provide real business advantage and help our organisations destroy the competition or serve customers better. When we in technology deliver unique competitive advantage to our business folks, IT is at its strongest. Competitive advantage driven by technological advantage is often transient, as competitors follow quickly. When competitive advantage turns into table stakes just to stay in the market, then IT had better be cheap and relentlessly reliable.

So can technology on its own deliver business advantage? I have never seen that happen; successful business projects with IT components are always aligned with our business processes, people, customers and suppliers. Miss out the people piece and even projects designed to beat the competition will fail!

Malik – To what extent is it, or will it be, possible to remove this human element, thanks to developments in IT technology – especially management software?

O’Donnell – The simple answer is that it never will be possible. People are always at the centre of business, whether as suppliers, customers or internal stakeholders. It is possible to reduce costs, shorten delivery cycle times and improve reliability (right first time) through automation, but people will always be there to mess up our best-laid plans.

For IT infrastructure delivery and operation, automation tools are becoming table stakes. Distributed systems at scale are at the centre of many businesses, and we talk about so many racks of blades for this function and so many hundred servers and network ports for that application. The only way to manage that scale of operation is with automation; put people in to do it by hand and watch the chaos.

The two key measures I use to determine organisational effectiveness are cycle time (CT – how long it takes to serve a customer, bare-metal build and deploy an application, or fix a problem) and right first time (RFT – how often we have to apologise and serve the customer again (if they let us), rebuild an application, or rework a badly implemented fix). Properly applied, automation can be a great tool for improving both RFT and CT.
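
To make those two measures concrete, here is a minimal sketch of how CT and RFT could be computed from a simple log of work items; the log and its field names are purely illustrative, not taken from any real toolset:

```python
from datetime import datetime

# Hypothetical work-item log: when each request was opened, when it
# was delivered, and how many attempts it took to get it right.
work_items = [
    {"opened": datetime(2008, 5, 1, 9), "closed": datetime(2008, 5, 1, 17), "attempts": 1},
    {"opened": datetime(2008, 5, 2, 9), "closed": datetime(2008, 5, 4, 12), "attempts": 3},
    {"opened": datetime(2008, 5, 5, 9), "closed": datetime(2008, 5, 5, 10), "attempts": 1},
]

# Cycle time (CT): average elapsed hours from request to delivery.
hours = [(w["closed"] - w["opened"]).total_seconds() / 3600 for w in work_items]
avg_ct = sum(hours) / len(hours)

# Right first time (RFT): share of items delivered correctly first go.
rft = sum(1 for w in work_items if w["attempts"] == 1) / len(work_items)

print(f"Average cycle time: {avg_ct:.1f} hours")
print(f"Right first time:   {rft:.0%}")
```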

Malik – To put it another way, how much automation within the data centre is a good thing?

O’Donnell – In my opinion it is binary: if the automation is well designed and helps us deliver business advantage, then we should bring it on. Poor automation – automation for its own sake, toys for the boys – that does not improve cycle times and requires rework is just stupid.

Malik – Virtualisation – one compute/processing pool, one storage pool – is it that simple?

O’Donnell – If one automates a mess, one merely shortens the time to meltdown; if one virtualises a mess, there is only limited advantage. Automation and virtualisation can only be successful in organisations that apply the power of one.

What do I mean by the power of one? Ruthless focus on standardisation and total attention to detail: get all systems patched to the same levels, and use the same versions of software and the same directory structures everywhere. Exposing the relative costs, reliability and cycle-time improvements of standardisation to our business colleagues helps us engage them in the process.
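
By way of illustration, a power-of-one audit can be this simple – the sketch below (the inventory, host names and standard build are entirely hypothetical) flags any server that has drifted from the single approved patch level and software version:

```python
# Hypothetical inventory audit for the "power of one": flag any server
# that has drifted from the single standard build.
standard = {"patch_level": "SP2-2008.04", "app_version": "5.1.3"}

inventory = [
    {"host": "web01", "patch_level": "SP2-2008.04", "app_version": "5.1.3"},
    {"host": "web02", "patch_level": "SP1-2007.11", "app_version": "5.1.3"},
    {"host": "db01",  "patch_level": "SP2-2008.04", "app_version": "5.0.9"},
]

for server in inventory:
    drift = {key: server[key] for key in standard if server[key] != standard[key]}
    if drift:
        print(f"{server['host']}: non-standard {drift}")
```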

Virtualisation (compute, network and storage) helps us separate our application components from the underlying infrastructure. This helps us drive availability in a world where only the most modern equipment can offer cost-effective services. Virtualised platforms enable us to change infrastructure services transparently and provide great flexibility in operations.

Malik – Or does server/storage virtualisation then demand network virtualisation and place other strains on the data centre’s IT infrastructure?

O’Donnell – There is a lot of talk about current IT platforms having high power densities: racks of blades, large storage arrays and high-performance, high-packing-density network equipment can all cause power and cooling issues for older sites. Without a clear understanding of data centre economics, many data centre managers worry about this unnecessarily.

The 20th century way of looking at data centres was all about how much effective area could be delivered for computing systems. Packing densities were low (less than 500 W/m²) and we could fill the racks with equipment, providing adequate cooling and power across the whole floor area. Data centre managers could explain to their board that the data centre was full – hey, look, I can show you!

Current high-density infrastructure can need more than 20 kW in a single rack! Mixing these into a low-density site without proper consideration of airflow and alignment can be a disaster! The data centre manager works hard, consolidates his old platforms onto a much smaller number of blade servers, and still the data centre is full! This time it does not have enough cooling capacity or protected power to fill all of the racks. The data centre manager finds it much harder to explain to the business why the site is still full.

Actually, 21st century data centres should be measured on power delivery capability rather than area. In fact, I often say that data centres are cheap – after all, they are just large sheds on a raised floor; it is the power and cooling plant that is expensive!

Data centre managers should stop looking at how many square metres are left in a site and focus on power and cooling capacity. Low-density sites can be set up to deliver very effective hosting for high-density platforms; it’s just basic airflow and arithmetic, after all. Open space is simply a consequence of managing airflow and power in a constrained site.
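
Here is that arithmetic worked through, using the densities quoted above; the site dimensions are illustrative assumptions:

```python
# Illustrative arithmetic: let power, not floor area, set the limit.
site_area_m2 = 2000        # hypothetical 20th-century machine room
site_density_w_m2 = 500    # power/cooling designed for <500 W/m2
rack_power_w = 20_000      # one modern high-density blade rack

site_power_w = site_area_m2 * site_density_w_m2    # 1,000,000 W budget
racks_supported = site_power_w // rack_power_w     # 50 racks
area_per_rack_m2 = site_area_m2 / racks_supported  # 40 m2 per rack

print(f"Site power budget:        {site_power_w / 1000:.0f} kW")
print(f"High-density racks:       {racks_supported}")
print(f"Floor area to spread out: {area_per_rack_m2:.0f} m2 per rack")
```

On these assumed figures the site can power only 50 racks, so by floor area it looks nearly empty – exactly the “full but empty” situation described above.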

Malik – Is the current obsession with green IT/green data centres anything more than a recognition of good business practices that companies should be implementing anyway?

O’Donnell – Green is just common sense. Energy is not free (and the laws of supply and demand mean the more we use, the more it costs per unit), so why waste it? I remember my father being green decades ago because he hated wasting his hard-earned cash on unnecessary bills!

I coined the phrase “Green is Good”, with no apology to the movie Wall Street. Green is a template for focusing on how much energy we burn, and on how much EBIT the business has to hand to IT rather than to where it should really go: our shareholders.

Green often challenges the institutional memory of how data centre services are delivered. For example, why are we constrained to operate data centres at between 20 and 24 degrees Celsius? Try reading the boilerplates and product specification sheets for the equipment we run in the data centres; I challenge you to find any current equipment with such tight constraints. After much lobbying, this has been picked up by ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers), which is changing its recommendations and specifications.

Malik – And dare one suggest that the vendor community has hijacked the green bandwagon with, on occasion, less than complete integrity?!

O’Donnell – Vendors make all sorts of claims for their products, most of which are so woolly that they can get away with it. Nevertheless, being seen to be green is a definite advantage for vendors in the buying cycle. Savvy buyers should always look at the total impact of a new system on their own environment and cost base.

When buying a piece of equipment, look at how much electricity it uses and whether it has any low-power modes it can drop into when lightly loaded. Copan developed a great disk array that powers down disks that are not being used – fantastic.

Look at the impact on power streams – does the equipment need AC, or can it work with DC? Does it have very tight environmental constraints, so that it needs a costly air-conditioned data centre, or could we consider using fresh air to cool it for part of the year?
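
To sketch what assessing that total impact might look like, the back-of-the-envelope estimate below prices a device’s energy over a year; every figure, including the site overhead multiplier for cooling and power distribution losses, is an assumption for illustration rather than vendor data:

```python
# Back-of-the-envelope annual energy cost for one device.
# All figures are illustrative assumptions, not vendor data.
active_load_w = 400       # average draw under normal load
low_power_w = 150         # draw in its low-power mode
low_power_hours = 3000    # hours per year spent lightly loaded
site_overhead = 2.0       # cooling/UPS multiplier on device power
tariff_per_kwh = 0.10     # electricity price per kWh

active_hours = 8760 - low_power_hours
device_kwh = (active_load_w * active_hours + low_power_w * low_power_hours) / 1000
facility_kwh = device_kwh * site_overhead  # what the site actually buys
annual_cost = facility_kwh * tariff_per_kwh

print(f"Device energy:   {device_kwh:,.0f} kWh/year")
print(f"Facility energy: {facility_kwh:,.0f} kWh/year")
print(f"Annual cost:     {annual_cost:,.2f}")
```

On these assumed numbers a good low-power mode cuts the bill noticeably, which is exactly why features like the Copan power-down are worth paying attention to.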

Malik – For example, if the forecasts of doom and gloom are to be believed, power supplies and data centre space will not be able to cope with demand in certain cities and regions within Europe – do you subscribe to this view?

O’Donnell – I don’t think this is a view; it is inevitable. It is only a matter of time before we see power caps on industrial plant in Europe, in much the same way as these have been imposed in parts of the US. Politically, voters are much more likely to get any new power than corporations are. Under-investment and poor capacity forecasting in metropolitan areas, along with an always-on society (consumers as well as businesses), have led to dramatic per-head growth in power demand in urban areas.

When I was a child, the electricity meter actually stopped overnight because all appliances were firmly switched off at night. Try looking at your own baseline electrical load after the lights are out and everyone has gone to bed!

Malik – Allied to my last question, there seems to be a growing acceptance of moving data centre locations away from ‘high-density business areas’ – are there any downsides to this approach?

O’Donnell – In many cases this is a good thing to do; however, we need a clear understanding of what data centres can be used for and of their location constraints:

Many financial services applications are network-latency sensitive, so the data centres hosting them need to be as close to financial centres as possible. Think about the advantage of getting market data faster than the competition!

Many video-on-demand applications need to deliver high network bandwidth to many consumers continuously, so caching data centres located close to urban populations are essential.

Some data protection technologies need to be within a very limited distance of the primary data source to keep data replication synchronous.

For non-automated sites, where much hands-on work is needed to maintain and build systems, being located near the support staff is essential. As a result, co-location sites are most often found near large IT consumers.

Malik – The more so if the concept of the ‘lights out’ data centre becomes a reality?

O’Donnell – Lights-out, managed services and computing as a service would be quite a strong driver for locating data centres near major underutilised power sources. Microsoft and Google have been buying up old steel towns in the mid-west USA for some time as locations for mega data centres.

Malik – Are businesses sufficiently aware of the importance of data centre/computer room design, or is there still a need for education that good design, and a constant review process, are essential to optimise performance/costs?

O’Donnell – Career progression in IT is pretty unique. If you aspire to work with chemicals or foodstuffs in an industrial setting, you will generally have a degree in engineering. IT staff seem to come from all walks of life; often a career change from a business unit, intended to help IT–business alignment, means we get a CIO who is wonderfully aware of how the business operates but woefully ignorant of how to run an efficient data centre or operate a highly reliable IT infrastructure.

There is a huge amount of genuine ignorance about how data centres operate, what makes them efficient and what best practice looks like. I rarely see a site that is even close to optimal. Most are worse than random because of commonly held beliefs that are just plain wrong.

Malik – Blade servers – on balance a good technology – are there any significant downsides?

O’Donnell – Blades – servers, storage and network alike – are a great idea, as long as we don’t expect to fit them at high density into a low-density 20th century data centre. Remember, data centres are cheap – spread the equipment out and give the cooling systems a chance to work effectively.

Malik – Software-as-a-service is gaining significant momentum. What are the implications, if any, for the data centre’s infrastructure/demands placed on it?

O’Donnell – Software as a service will be a great driver for out-of-town, highly efficient and highly automated sites. The large number of conflicting demands that business requirements place on a typical CIO means that it is genuinely difficult for him to standardise his infrastructure.
