The Hot Aisle
Fresh Thinking on IT Operations for 100,000 Industry Executives

There are some pretty big movements happening right now as a result of Cisco's announcement of California – the Unified Computing System (UCS) – their networked server offering. Politically this has partially alienated HP and IBM, who have big server businesses and also buy a ton of Cisco equipment. So we are seeing a polarization of network suppliers, even more marked than before, with Cisco on one side and the rest of the vendors on the other. (Plainly IBM and HP will be looking to alternative vendors if they can move their clients away. There is no certainty that they can achieve this, but with IGS and EDS leading on services they may well be successful.)

Cisco have a vision of the data center network, shared by EMC and VMware, in which the storage network and the server networks converge quite quickly: the SAN (Fibre Channel) becomes FC over Ethernet (FCoE), and servers communicate with each other, with the network and with storage through big-tin Cisco Nexus 5000 data center switches. Servers, firewalls and storage all become appliances hanging off a unified network. If we look at what is happening in this space we can already see a number of vendors lining up with appliances, such as StorMagic, who deliver SAN functionality for SMEs in a VMware guest.

Needless to say, folks like Brocade and others don't quite see it that way and point out that Cisco is introducing lots of proprietary technology that locks you into their single vision. FCoE hasn't quite completed all of its approvals through the standards bodies, but frankly it is pretty close at the date of this post.
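
To give a feel for why converged FCoE traffic needs more than standard Ethernet framing, here is a rough back-of-the-envelope calculation in Python; the header overheads are approximations for illustration, not figures taken from the specification.

    # Rough sketch: why FCoE needs "baby jumbo" Ethernet frames.
    # Header overheads below are approximations for illustration only.

    FC_MAX_FRAME = 2148   # max Fibre Channel frame: SOF + 24-byte header + 2112-byte payload + CRC + EOF
    ETH_HEADER = 14       # Ethernet destination, source and EtherType
    FCOE_ENCAP = 14       # approximate FCoE encapsulation header
    ETH_FCS = 4           # Ethernet frame check sequence

    fcoe_frame = FC_MAX_FRAME + ETH_HEADER + FCOE_ENCAP + ETH_FCS
    print(f"Largest FCoE frame on the wire: ~{fcoe_frame} bytes")
    print("A standard 1500-byte Ethernet MTU is too small, so jumbo frames are needed end to end")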

10 Gb is still expensive but worthwhile in the core data center network, as it can help leverage virtualization (both high-throughput storage and high-bandwidth disks are mandatory here to get good packing densities). We also need to be looking for some capability to offload the massive interrupt load that virtualized servers get from 10 Gb onto an appliance like Xsigo or Cisco's California.
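
As a rough illustration of that interrupt problem, the sketch below estimates packet and interrupt rates on a saturated 10 Gb link; the frame sizes and the coalescing factor are assumptions chosen purely for scale.

    # Back-of-the-envelope packet and interrupt rates at 10 Gb/s.
    # Frame sizes and the coalescing factor are illustrative assumptions.

    def packets_per_second(link_gbps: float, frame_bytes: int) -> float:
        """Packets per second needed to fill the link at a given frame size."""
        return (link_gbps * 1e9 / 8) / frame_bytes

    for frame in (1500, 9000):   # standard vs jumbo frames
        print(f"{frame:>5}-byte frames: ~{packets_per_second(10, frame) / 1e6:.2f} M packets/s")

    # One interrupt per packet would swamp a virtualized host, which is why
    # coalescing and offload (to the NIC or an appliance) matter so much.
    coalesce = 64   # assumed packets handled per interrupt
    rate = packets_per_second(10, 1500) / coalesce
    print(f"With ~{coalesce} packets per interrupt: ~{rate / 1e3:.0f} k interrupts/s")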

How might 10 Gb benefit data center applications? Potentially it offers higher throughput and lower latency, enables virtualization, and provides a path to a unified network combining storage and data center traffic.

All consumers of IT capacity need to operate at scale and in a repeatable way. 10 Gb offers the opportunity to collapse and simplify today's large blade environments, with integrated switch fabrics in each frame, into a single data-center-wide fabric, with resulting improvements in scalability and manageability.
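
A crude way to see the manageability gain is simply to count managed switching elements before and after the collapse; the chassis count and switches-per-chassis figures below are assumptions, not drawn from any particular environment.

    # Crude count of managed switching elements: embedded per-chassis switches
    # versus a collapsed data-center-wide fabric. All counts are assumptions.

    chassis = 32              # blade enclosures in the environment
    embedded_switches = 2     # switch modules per enclosure (Ethernet plus FC)
    core_switches = 2         # core switches in either design

    per_chassis_design = chassis * embedded_switches + core_switches
    collapsed_design = core_switches   # one unified fabric terminating in the core

    print(f"Per-chassis fabrics: {per_chassis_design} managed switching elements")
    print(f"Collapsed fabric:    {collapsed_design} managed switching elements")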

Higher speed networks offer the promise of much greater simplicity, with architectures collapsed into a small number of core switches: many fewer elements to manage and much greater resilience. They also support IP SANs and virtualization much better, offering a path from current VMware cluster failover to data center failover, with much better availability as a result. As an example, we might see a major power outage affecting one hall of our data center and the workload seamlessly migrating to another hall on the same site.
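
As a sketch of the availability argument: if each hall is independently available 99.9% of the time (an assumed figure) and workloads can migrate between halls, the combined downtime drops sharply.

    # Illustrative availability arithmetic for hall-to-hall failover.
    # The 99.9% per-hall figure and the independence of failures are assumptions.

    HOURS_PER_YEAR = 8760
    hall_availability = 0.999

    single_hall_downtime = (1 - hall_availability) * HOURS_PER_YEAR
    both_halls_down = (1 - hall_availability) ** 2   # assumes independent failures
    failover_downtime = both_halls_down * HOURS_PER_YEAR

    print(f"Single hall:        ~{single_hall_downtime:.1f} hours of downtime per year")
    print(f"With hall failover: ~{failover_downtime:.3f} hours of downtime per year")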

By the way, I saw my first UCS platform at EMC World – 10 kW in a 45 kg (100 lb), 6U, 32-inch-long package. It is going to be hell to manage in a data center.
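
To put that density in perspective, here is a quick calculation of what a rack full of these would draw; the 42U rack height and the idea of filling the rack are assumptions for illustration.

    # Power density of the UCS package described above: roughly 10 kW in 6U.
    # Rack height and chassis-per-rack figures are assumptions for illustration.

    chassis_kw = 10
    chassis_u = 6
    rack_u = 42

    chassis_per_rack = rack_u // chassis_u
    rack_kw = chassis_per_rack * chassis_kw

    print(f"{chassis_per_rack} chassis per 42U rack -> ~{rack_kw} kW per rack")
    print("Typical air-cooled racks handle well under 10 kW, hence the cooling concern below.")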

There Are 7 Responses So Far. »

  1. Why have all of the IT Vendors lined up into camps facing off over …: They also support IP SANs and Virtualiza.. http://twurl.nl/ktrfuz

  2. #virtualization Why have all of the IT Vendors lined up into camps facing off over … http://bit.ly/NA1YD

  3. When you say “It’s going to be hell to manage” – pls can you elaborate?

  4. FCoE is not FCIP over Ethernet; FCIP already worked over Ethernet, it's FC encapsulated in TCP/IP. FCoE is the raw Fibre Channel Protocol (FCP) over a new layer 2 Ethernet protocol.

  5. Nik,

    Thanks for pointing out the inaccuracy, I have made the correction directly to the article. As you say FCoE is a layer 2 protocol.

    Steve

  6. Hi Steve

    The problem is that this is an extremely high-density platform at 10 kW for six blades. Cooling a number of them will be challenging, extremely challenging, unless we have help like water-cooled doors, cold aisle containment or extreme-density cooling.

    Steve

  7. […] Read the entire blog entry here >> […]

Post a Response