
I received a very thought-provoking article about the relative merits of FCoE and Fibre Channel storage networks from Christian Illmer, Senior Director Application and Solution Management Enterprise, ADVA Optical Networking – 8G FC versus FCoE – distinguishing myth from reality.

I publish the article here in its entirety and add a little analysis of my own to Christian’s points.

Here is the article:

Over the last several years, network convergence has evolved as the almighty solution to all problems in the data centre network. Struggling against rising energy bills and operating expenditures, IT managers have been pushed to believe that only a converged Ethernet infrastructure can solve their problems.

My sense is that there are a few drivers for adopting converged networking, and mostly they come down to cost and complexity reduction at the physical layer – fewer cables, lower-cost HBAs (host bus adapters), perhaps fewer access-layer and core switches (though that is a detailed cost-analysis question), and a reduced number of WAN links between paired sites.

By converging LAN, SAN and other data centre networks onto a single underlying infrastructure, enterprises can realise profound cost savings. But in a typical data centre environment, the consolidation of sites, hardware and expertise into a seamless network means bringing together a wide variety of protocols – including FC (Fibre Channel), Ethernet, ESCON (Enterprise Systems Connection) and FICON (Fibre Connection).

Here Christian has a point – SANs are not just about FC, with many enterprises needing FICON for their mainframe systems. ESCON, though, is largely irrelevant today.

Merely converging FC and Ethernet onto a single platform isn’t the solution that will satisfy enterprise customers. It’s true that those are the two highest-volume interfaces in today’s data centres. However, pushing for a new single standard like Converged Enhanced Ethernet (CEE) will not cause the other interfaces and their respective legacy equipment to disappear overnight.

Again Christian has an extremely good point. No one is going to throw out their old FC and core network switches in the current economic climate. It is essential that FCoE can co-exist with FC for the foreseeable future. My sense is that the economics at the edge are strongly in favour of FCoE and 10G, with the latest HBAs / NICs at $700 per port for FC versus $135 per port for 10gigE. Edge switch economics are also strongly in favour of 10gigE, at $4000 per switch port for 8 Gbps FC versus $1200 per port for 10gigE.

Work still needs to be done on distribution-layer and core switch port costs, and on achievable fan-out ratios, to see where the economics sit there.
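To make those edge numbers concrete, here is a quick back-of-the-envelope calculation (in Python) using the per-port prices I quoted above. The dual-attachment assumption – two ports per server on each network, for redundancy – is mine, purely for illustration:

    # Edge connectivity cost per dual-attached server, using the
    # per-port prices quoted above (all figures illustrative).
    FC_HBA_PORT = 700        # $ per 8G FC HBA port
    GE10_NIC_PORT = 135      # $ per 10gigE NIC/CNA port
    FC_SWITCH_PORT = 4000    # $ per 8G FC edge switch port
    GE10_SWITCH_PORT = 1200  # $ per 10gigE edge switch port

    def per_server(ports, adapter, switch):
        """Adapter cost plus the matching edge switch port cost."""
        return ports * (adapter + switch)

    # Traditional design: 2 x FC for storage plus 2 x 10gigE for LAN.
    traditional = (per_server(2, FC_HBA_PORT, FC_SWITCH_PORT)
                   + per_server(2, GE10_NIC_PORT, GE10_SWITCH_PORT))

    # Converged design: 2 x 10gigE carrying both LAN and FCoE traffic.
    converged = per_server(2, GE10_NIC_PORT, GE10_SWITCH_PORT)

    print(f"Traditional FC + Ethernet: ${traditional:,} per server")  # $12,070
    print(f"Converged FCoE on 10gigE:  ${converged:,} per server")    # $2,670

On those assumptions the converged edge comes in at well under a quarter of the cost per server, which is why I say the edge economics strongly favour FCoE.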

It is important to consider the security and stability of a single converged platform. Multiple networks may require more management, but when problems occur it is much easier to isolate the fault without affecting other networks. Conversely, if a single converged platform were in operation and problems occurred, there would be a significantly greater risk of extended downtime and knock-on issues as staff tried to isolate the fault.

Here the issue is much more complex. 10gigE is actually quite resilient, particularly when supported by an IP stack: it can deal with multiple routes, offers rapid fault isolation and repair, and has quite good monitoring visibility. FC and FCoE both behave very differently and are sensitive to delay and frame loss. In fact, neither delay nor frame loss is acceptable on the interface between an application and its storage layer.

FC (and this applies equally well to FCoE) was never designed for extremely large networks, where frame loss and delay might occur. Monitoring capability is somewhat immature in both, resulting in a need to massively over-provision SAN fabrics to avoid the issue. Adding 10gigE into the mix is only going to make this more difficult and obscure. I would hate to be the SAN engineer diagnosing a fault that was impacting my firm’s ERP system with the CEO, CFO and CIO breathing down my neck!
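To illustrate why frame loss is so toxic on the storage path, here is a minimal sketch. The frame size and the assumption that every dropped frame stalls one I/O pending a SCSI-level retry are mine, chosen only to show the order of magnitude:

    # Why even tiny loss rates matter on a storage link: assumes a
    # saturated 10 Gb/s link with 1,500-byte frames, and that every
    # dropped frame stalls one I/O until a SCSI-level retry completes.
    LINK_BPS = 10e9
    FRAME_BITS = 1500 * 8
    FRAMES_PER_SEC = LINK_BPS / FRAME_BITS  # ~833,000 frames/s

    for loss_rate in (1e-9, 1e-6, 1e-4):
        stalled_per_hour = FRAMES_PER_SEC * loss_rate * 3600
        print(f"loss rate {loss_rate:.0e}: ~{stalled_per_hour:,.0f} stalled I/Os per hour")

Even a “good” Ethernet loss rate of one frame in a million stalls around 3,000 I/Os per hour on these assumptions, which is exactly why FCoE demands lossless Ethernet (priority-based flow control) rather than relying on retransmission the way TCP does.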

Despite concerns, the future looks bright for convergence and the Ethernet protocol in the SAN market. However, today’s market still shows that 8G FC interfaces continue to grow, even though some analysts never expected 8G FC to become a commercial success. Skyrocketing sales numbers for 8G FC have helped Brocade to gain significant market share from its rivals. Cisco learned the hard way that just having FCoE and Data Centre Ethernet switches was not enough. Today even Cisco, the biggest proponent of Data Centre Ethernet, is offering 8G FC as an interface on its FC switches.

8G FC has been a runaway success – in fact there is a global shortage of parts right now due to underestimates of demand! However, 10gigE is growing even faster and cannot be ignored.

Service providers are beginning to support 8G FC as the standard for WAN transport over their networks, too. In January 2009, ADVA Optical Networking and COLT announced the market’s first deployment of an 8G FC service over distance. Featured within a customer’s SAN, the new service, built specifically upon the ADVA FSP 3000 system, provided a point-to-point link that stretched beyond 135km.

At that time, ADVA Optical Networking and COLT Telecom were the only companies capable of deploying 8Gbit/s FC over distance. Since then, the landscape has shifted further towards 8G FC: the previously dominant 2G FC and 4G FC interfaces have been replaced with 8G FC connectivity, while support for 8G FC transport has become mandatory for all data centre networking environments.

This is a pretty impressive achievement for ADVA and COLT – well done.

But what about FCoE and Data Centre Ethernet? They still remain, but now only as interconnect protocols for rack-mounted servers that minimise the number of required network adapters and cables. The new FCoE switches are now just top-of-rack FCoE aggregators that feed signals into the legacy LAN and SAN switches still in use. Today there is no standard that defines how to implement inter-switch links between those new Ethernet switches. And even if such a standard were to become available, the typical bandwidth would not exceed the bandwidth that can be achieved with 8G FC.

Christian has a pretty valid point here: the standards are new, not everyone can integrate with everyone else reliably enough yet, and there is a lot more work to do. In the real world, economics have a habit of being important behavioural drivers, and the cost differential (at the edge today) will win out. You might end up with a fully converged network someday, but not someday soon in my opinion – and certainly not if you need dial-tone reliability and no risk.
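One more observation on the bandwidth claim above: once line-encoding overhead is counted, 10gigE actually carries slightly more payload than 8G FC, not less. A quick sketch using the standard encodings – 8b/10b for 8G FC at 8.5 Gbaud, 64b/66b for 10gigE at 10.3125 Gbaud – with frame and protocol overheads ignored for simplicity:

    # Effective data rate after line-encoding overhead.
    def effective_gbps(line_rate_gbaud, payload_bits, coded_bits):
        return line_rate_gbaud * payload_bits / coded_bits

    fc_8g = effective_gbps(8.5, 8, 10)        # 8G FC uses 8b/10b encoding
    ge_10 = effective_gbps(10.3125, 64, 66)   # 10gigE uses 64b/66b encoding

    print(f"8G FC effective:  {fc_8g:.2f} Gb/s")   # 6.80 Gb/s
    print(f"10gigE effective: {ge_10:.2f} Gb/s")   # 10.00 Gb/s

So even on raw throughput the Ethernet side is not at a disadvantage; the argument really does come back to standards maturity and cost.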

So what’s the lesson learned from all this? Don’t believe all the marketing slides your network vendor presents to you – reality is a different beast. And ultimately, look out for 16G FC – it could be on your doorstep sooner than you expect!

Personally I think that throughput and performance are not the critical success factors here; in the end it all comes down to cost, and I am afraid the runes point to 10gigE and FCoE.

Responses

  1. Is convergence really the right term here? I have a small infrastructure. It’s around 70 hosts, 12-15 of which are actually connected to the SAN. I’ve got around 20TB of raw storage in two EMC AX4-5s, and I’m using FC. This is just to let you know that I have no idea about what I’m going to ask ;-)

    I’ve spoken to people who have much larger SAN infrastructures than I do, and the people who run Ethernet-based SANs (be it FCoE or iSCSI) have all said that their SAN traffic is completely segregated from their networking traffic. None of them do any sort of complex routing, certainly not over WANs, and I haven’t heard of anyone doing anything with IP-based firewalls or the like.

    I assume you speak to more storage engineers than I do, so are these features that people are using? Is the 10GbE solution only speed related, or are people taking advantage of the flexibility of the platform, or are they treating it like FC, just faster and with copper cables?

  2. Hi Matt, thanks for the comment and a really good question, which I translate as – what is all of this stuff about putting the SAN into the LAN? What about iSCSI, FCIP, FCoE, CIFS, NFS, NAS, FC etc. – and more acronyms than a reasonable person should be asked to ingest? It’s such a good question that I will write a blog article about it and try to make some sense of what it means and why it is important.

    Steve

  3. The author of this post reinforces the view that #FCoE is (today) only for the access layer: http://tr.im/zbhJ

