The Hot Aisle
Fresh Thinking on IT Operations for 100,000 Industry Executives

I got a question from a colleague, Bill, about data center migrations:

What is a good range for the number of servers that could be migrated over a weekend? I know there are a bunch of qualifiers (e.g. new kit versus “lift and shift”, amount of storage, etc.), but is there some type of answer you would give if asked?

I get asked that question a lot, and my usual response is that we take an application-centric approach, not a server-centric one. The real question is how many applications we can move in one weekend. Once you have worked that out, the server count follows.

You are of course right: there are a number of factors to consider when working out the number of applications, including:

  • The team size
  • The migration types involved
  • The complexity of the applications
  • The SLA levels
  • The criticality of the applications (we do not want to move too many highly complex, highly critical applications in one weekend)

Generally, we move at a relatively slow pace on the first migration weekend, then ramp up from there. In the past I have moved anywhere between 1 and 10 applications on the first weekend, rising to between 5 and 30 applications on later weekends. With one application component per server and an average of about 7 components per application, 30 applications works out at roughly 200 servers, which I would treat as the maximum (it depends on the complexity of your applications).
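To make that arithmetic concrete, here is a minimal sketch in Python; the function name and the sample figures are mine, drawn from the ranges above rather than from any prescribed method:

    # Rough sketch of the weekend-capacity arithmetic described above.
    # Assumes one application component per server, ~7 components per app.

    def servers_for_weekend(num_apps, avg_components_per_app=7.0):
        # Estimate how many servers a weekend's application batch touches.
        return round(num_apps * avg_components_per_app)

    # Slow first weekend (1-10 apps), then ramp up (5-30 apps):
    for apps in (1, 10, 30):
        print(f"{apps:>2} apps -> ~{servers_for_weekend(apps)} servers")
    # 30 apps x ~7 components/app ~= 210 servers, i.e. the ~200-server ceiling.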

There Are 3 Responses So Far.

  1. Steve,

    Very interesting topic. One of our company partners, who is actively involved in DC migration programs, mentions that they can typically achieve a rate of 10 servers per man-day in a virtualization scenario, which can take a bit longer than a straight lift and shift.

    Now, looking at the total effort of a migration, most of it, as with any DC project, goes into preparation, so that you minimize the risk of disruption or downtime once the migration is complete.

    The reason most of the effort goes into preparation is that organizations rarely have a sufficient level of inventory accuracy in their infrastructure and application models (one of our customers reckons data quality in their CMDB should be at least 97%). Without a reliable, up-to-date source of information, the planning exercise is bound to overrun, as people end up arguing about what should be migrated rather than how.
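    As an illustration of what that data-quality bar means in practice, here is a toy accuracy check in Python; the record shapes, hostnames, and data are invented, and only the 97% threshold comes from the figure above:

        # Toy check of CMDB accuracy against an audited sample (data invented).
        cmdb = {
            "srv01": {"os": "RHEL 5", "app": "billing"},
            "srv02": {"os": "Win2003", "app": "crm"},
            "srv03": {"os": "RHEL 4", "app": "billing"},
        }
        audit = {  # what the audit actually found
            "srv01": {"os": "RHEL 5", "app": "billing"},
            "srv02": {"os": "Win2003", "app": "intranet"},  # stale CMDB entry
            "srv03": {"os": "RHEL 4", "app": "billing"},
        }
        matches = sum(cmdb[h] == audit.get(h) for h in cmdb)
        accuracy = matches / len(cmdb)
        print(f"CMDB accuracy: {accuracy:.0%}")  # 67% in this toy sample
        print("OK to plan" if accuracy >= 0.97 else "Re-audit before planning")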

    In a typical scenario, you will see the project overrun by 15% and carry a higher risk of incidents or outages. The way organizations tackle this problem today (people not agreeing on the current state of the infrastructure and, more importantly, on what makes up each application) is by sending armies of sysadmins to audit the infrastructure and use their expertise to recreate an accurate baseline of how the applications actually run in the data center.

    Considering that manual server audit velocity is typically around 5 to 10 servers per man-day, you quickly get a picture of the true cost of your data center migration planning exercise.
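    A back-of-the-envelope version of that cost, in Python; the 2,000-server estate is a made-up example, while the velocities and the 15% overrun come from the figures above:

        # Manual audit effort at 5-10 servers per man-day, with ~15% overrun.
        servers = 2000  # hypothetical estate size
        for velocity in (5, 10):
            days = servers / velocity
            print(f"{servers} servers @ {velocity}/man-day: {days:.0f} man-days "
                  f"(~{days * 1.15:.0f} allowing a 15% overrun)")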

    One thing we have done at Tideway is to automate this process. From a single appliance, physical or virtual, Tideway Foundation continuously maps application relationships to physical and virtual infrastructure, including the dependencies between them. This single, automated view of application topology enables enterprises to better manage changes, especially those related to data center projects. See http://www.tideway.com/solutions/optimization/ for the rest of the story.

  2. Manu,

    Getting the CMDB right is absolutely central to a successful data center consolidation or migration. Every point of inaccuracy is amplified, causing multiple migration failures. There is a huge place for products such as Tideway in automatically discovering assets and configurations, which can cut manual effort significantly. However…

    There is no magic bullet: discovering business and support relationships is impossible to achieve through automated discovery alone, so there is an absolute need to work closely with the business to identify risks and interactions that discovery never can.

    Steve

  3. I couldn't agree more. Business services can't be automatically discovered, and the CMDB, like discovery tools, is only there to make people's jobs easier (de-risking, facilitating, etc.), to support the ITIL processes and the interaction between the business and IT. They will never capture 100% of the risks; nevertheless, they help.

    Modeling business services, which is mandatory for running any DC project, can be greatly facilitated with tools such as Tideway Foundation. Our customers acknowledge that modeling a fairly complex application (say, one running on about 50 servers) manually (e.g. as a Visio diagram) takes about 25 man-hours (half an hour per server). And that is for a single update.

    The same work with Tideway Foundation (once the product is deployed) takes about 8 man-hours for the initial model. The exercise involves the business people, so that we can capture their knowledge in the patterns Foundation uses to discover services automatically.

    Once the pattern is created, it will discover every instance of the application and tell you automatically, on a daily basis, whether your application is deviating from the agreed baseline, something a manual audit will never manage to do proactively.
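    At heart, that daily drift check is a set comparison between the agreed baseline and what discovery currently sees. A generic sketch in Python (this is not the Tideway Foundation API, just the comparison any such tool performs; the hostnames and components are invented):

        # Generic baseline-drift check (not the Tideway API; data invented).
        baseline = {("web01", "tomcat"), ("web02", "tomcat"), ("db01", "oracle")}
        discovered = {("web01", "tomcat"), ("db01", "oracle"), ("web03", "tomcat")}

        missing = baseline - discovered     # components that disappeared
        unexpected = discovered - baseline  # components added outside the baseline
        if missing or unexpected:
            print(f"Deviation: missing={missing}, unexpected={unexpected}")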

