There is more to converged fabrics than just Fibre Channel over Ethernet. But that’s a good place to start.
Standards-making bodies are on the verge of ratifying Fibre Channel over Ethernet (FCoE) and Converged Enhanced Ethernet (CEE), also known as Data Center Bridging (DCB), as official standards. With ratification near, the idea of moving local area network (LAN) and Fibre Channel storage area network (SAN) traffic over the same Ethernet network is fast becoming a reality.
Unifying LAN and SAN traffic reduces the number of cables required to connect servers to storage resources in the data center, and, according to the experts, now is the time to start planning for unified network fabrics.
Such cabling messes are typical of too many of today’s storage area networks, which rely on point-to-point connectivity architectures for their “lossless” systems.
As a transmission protocol, Ethernet has flaws. It is considered a “best-effort” network that does not always deliver data in order and may drop packets altogether due to network congestion. By contrast, storage networks require that data be delivered in order and intact. The high-performance Fibre Channel protocol was developed to create a separate “lossless” network that carries Small Computer System Interface (SCSI) traffic out of the server and on to storage devices.
Combining the two and sending storage traffic over Ethernet networks requires two things: 10-Gigabit Ethernet (10-GbE) transmission speeds and enhancements to the Ethernet protocol that prevent data loss without incurring performance penalties.
Those enhancements, referred to collectively as Data Center Bridging or Converged Enhanced Ethernet, are being finalized by the Data Center Bridging (DCB) Task Group of the Institute of Electrical and Electronics Engineers (IEEE) 802.1 Working Group and are expected by year’s end. The key pieces include Priority-based Flow Control (IEEE 802.1Qbb), Enhanced Transmission Selection (IEEE 802.1Qaz), and Congestion Notification (IEEE 802.1Qau).
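Priority-based Flow Control is the mechanism that makes a chosen traffic class lossless. The Python sketch below is a conceptual illustration only, with arbitrary buffer sizes and thresholds; it is not drawn from the standard text. The point it shows is that, under congestion, a DCB switch pauses the sender for one priority rather than dropping that priority’s frames the way classic Ethernet would.

    # Conceptual sketch of Priority-based Flow Control (IEEE 802.1Qbb).
    # Buffer capacity and thresholds are arbitrary illustration values.

    class PriorityQueue:
        def __init__(self, priority: int, capacity: int = 100):
            self.priority = priority
            self.capacity = capacity
            self.depth = 0
            self.paused_upstream = False

        def receive(self, frames: int) -> None:
            self.depth += frames
            # Near-full buffer: ask the sender to pause ONLY this priority
            # instead of dropping frames under congestion.
            if self.depth >= 0.8 * self.capacity and not self.paused_upstream:
                self.paused_upstream = True
                print(f"priority {self.priority}: per-priority PAUSE sent upstream")

        def drain(self, frames: int) -> None:
            self.depth = max(0, self.depth - frames)
            if self.depth <= 0.2 * self.capacity and self.paused_upstream:
                self.paused_upstream = False
                print(f"priority {self.priority}: upstream transmission resumed")

    storage_class = PriorityQueue(priority=3)   # FCoE traffic is commonly mapped to priority 3
    storage_class.receive(85)                   # congestion builds: pause instead of drop
    storage_class.drain(70)                     # buffer drains: traffic resumes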
The implementation of CEE on 10-GbE clears the way for FCoE, which encapsulates Fibre Channel frames inside Ethernet frames for transmission across the converged network toward the SAN. The FCoE standard, also about to be ratified by the T11 Committee of the InterNational Committee for Information Technology Standards (INCITS), completes the puzzle by letting storage traffic flow reliably over 10-GbE pipes, creating a unified network fabric.
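A rough illustration of the encapsulation, not a rendering of the exact T11 frame format, is to wrap a complete Fibre Channel frame inside an Ethernet frame carrying the EtherType assigned to FCoE (0x8906). In the Python sketch below, the FCoE header fields (version, reserved bits, and start-of-frame delimiter) are reduced to a placeholder, and the MAC addresses are made-up values.

    # Simplified sketch of FCoE encapsulation: a whole Fibre Channel frame
    # is carried as the payload of an Ethernet frame with EtherType 0x8906.

    import struct

    FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

    def encapsulate_fc_frame(fc_frame: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
        """Wrap a complete Fibre Channel frame in an Ethernet frame."""
        if len(fc_frame) > 2148:                 # max Fibre Channel frame size in bytes
            raise ValueError("Fibre Channel frame too large")
        eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
        fcoe_header = bytes(14)                  # placeholder for version/reserved/SOF fields
        return eth_header + fcoe_header + fc_frame

    # A maximum-size FC frame plus headers exceeds the classic 1,500-byte
    # Ethernet payload, which is why FCoE links are typically configured
    # for "baby jumbo" frames of roughly 2.5 KB.
    frame = encapsulate_fc_frame(b"\x00" * 2148,
                                 src_mac=b"\x02\x00\x00\x00\x00\x01",
                                 dst_mac=b"\x0e\xfc\x00\x00\x00\x01")
    print(len(frame), "bytes on the wire (before the EOF trailer and Ethernet FCS)")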
The first step toward implementing a converged or unified network fabric is making the jump from traditional network adapters to newly available converged network adapter (CNA) cards. Using current technologies, servers typically require network interface cards (NICs) for Ethernet traffic and host bus adapters (HBAs) for Fibre Channel storage traffic, each of which requires multiple cables.
Most servers use Gigabit Ethernet copper cabling for connectivity, with as many as 10 cables running from the server to an access switch. There are many reasons for the cabling sprawl. Multi-core processors provide a wealth of processing power, and the workloads they host demand significant network bandwidth.
Gigabit Ethernet has become a bottleneck in the data center as customers roll out large numbers of virtual servers, or virtual machines (VMs). Virtualization lets multiple virtual servers share the underlying hardware of a single physical server, each running its own operating system and applications. In some cases, server administrators are running more than 20 applications per server, and all of that consolidated workload demands significant bandwidth to each physical machine.
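A back-of-the-envelope calculation shows the squeeze. The per-VM bandwidth figure below is purely an illustrative assumption, not a measured number:

    # Why Gigabit Ethernet becomes a bottleneck for consolidated servers.
    # The per-VM bandwidth demand is a hypothetical value for illustration.

    vms_per_server = 20          # "more than 20 applications per server"
    avg_mbps_per_vm = 150        # assumed sustained demand per VM, in Mbit/s

    aggregate_mbps = vms_per_server * avg_mbps_per_vm
    gbe_links_needed = -(-aggregate_mbps // 1000)      # ceiling division; 1 GbE = 1,000 Mbit/s

    print(f"Aggregate demand: {aggregate_mbps / 1000:.1f} Gbit/s")
    print(f"GbE links required: {gbe_links_needed}; a single 10-GbE link absorbs it easily")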
The advent of CNAs brings the two adapter types together in a single form factor, replacing the traditional two sets of connections, generally twisted-pair copper on the LAN side and OM3 fiber on the SAN side, with a single set. According to the Fibre Channel Industry Association (FCIA), all CNAs are expected to use SFP+ devices, which accommodate standard and non-standard optical technologies as well as direct-attach cables over the same SFP+ interface.
Fibre Channel over Ethernet is an important element of the converged or unified networking concept, but unified network fabrics are based entirely on lossless 10-GbE, which enables FCoE and also supports other storage protocols such as Internet Small Computer System Interface (iSCSI), Common Internet File System (CIFS), and Network File System (NFS).
Remove FCoE from the equation and 10-GbE alone will reduce the number of cables required to connect servers to switches. Those fewer cables, though, will all need to fully support 10-GbE transmission.
“The honest fact is that data center architects can consolidate cable and adapter connectivity just by deploying lossless 10-GbE to the server without even going to FCoE. They can then consolidate further by implementing FCoE, resulting in a total of two connections for resiliency,” says Kash Shaikh, marketing manager for Cisco Systems’ data center solutions. “You do not have to wire again to use FCoE because you have already consolidated network traffic using FCoE-capable CNAs and lossless 10-GbE.”
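The arithmetic behind that consolidation is straightforward. The sketch below is purely hypothetical: it assumes a rack of 40 servers and uses the cable counts cited above (up to 10 Gigabit Ethernet cables per server plus a redundant pair of Fibre Channel links, collapsing to two converged connections per server):

    # Hypothetical before/after cable count for one rack of servers.
    # Server count and per-server cable counts are illustrative assumptions.

    servers_per_rack = 40

    gbe_cables_per_server = 10        # LAN connections before convergence
    fc_cables_per_server = 2          # redundant Fibre Channel HBA links
    converged_cables_per_server = 2   # two 10-GbE/FCoE links for resiliency

    before = servers_per_rack * (gbe_cables_per_server + fc_cables_per_server)
    after = servers_per_rack * converged_cables_per_server

    print(f"Cables before convergence: {before}")
    print(f"Cables after convergence:  {after}")
    print(f"Reduction: {100 * (before - after) / before:.0f}%")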
Enterprise-class data centers, the biggest of the big, contend with the most complexity and cost, managing hundreds of servers and thousands of network connections. These types of data centers stand to benefit most from a converged or unified network.
“Server problems are magnified in terms of complexity and cost in enterprise data centers,” says Shaikh. “Cabling is an issue because complex jumbles of cables make troubleshooting and diagnostics difficult and can block airflow, resulting in more money spent on power and cooling in the data center.”
Unified fabrics will initially affect the first few meters of the data center network. Servers will use FCoE over CEE at 10-Gbit/sec to reach the first-hop access switch, where traffic then diverges to the LAN or to the existing Fibre Channel SAN.
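Conceptually, the access switch makes that split according to the type of frame it receives. The Python sketch below illustrates the idea only; it is not switch firmware. The EtherType values are the ones assigned to FCoE and to the FCoE Initialization Protocol (FIP):

    # How a converged access switch steers traffic: FCoE and FIP frames go
    # to the Fibre Channel forwarder toward the SAN; everything else stays
    # on the Ethernet LAN. Conceptual illustration only.

    FCOE_ETHERTYPE = 0x8906   # FCoE data frames
    FIP_ETHERTYPE = 0x8914    # FCoE Initialization Protocol (discovery/login)

    def steer(ethertype: int) -> str:
        if ethertype in (FCOE_ETHERTYPE, FIP_ETHERTYPE):
            return "Fibre Channel forwarder -> existing SAN"
        return "Ethernet switching -> LAN"

    for etype in (0x0800, FCOE_ETHERTYPE, FIP_ETHERTYPE):   # 0x0800 = IPv4
        print(f"EtherType 0x{etype:04x}: {steer(etype)}")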
One misconception is that converged networks and FCoE will necessarily require a complete upgrade of existing network infrastructure. According to Ahmad Zamer, product marketing manager for emerging technologies at Brocade, much of today’s installed base of cabling will do the trick for extremely short runs.
“Many customers think they will need all new cabling when, in reality, they can continue to use Cat 5 cables for deployments of up to 5 meters,” Zamer explains. “The initial fear was that cabling people would be hostile to the idea of network convergence, but they are happy because it simplifies maintenance. It makes it easier to find problems in the data center and reduces the number of times we have to roll a truck to the data center.”
Richard Villars, vice president of storage systems for research firm International Data Corp. (IDC), believes the adoption of FCoE and CEE technologies is more than just a technology transition. He says it’s part of an overall goal to change the way people build and operate data centers.
In an ideal world, data center managers will navigate to a converged network fabric as easily as walking through a wide-open data center.
“Servers are getting very small and all of the cables coming out of them are increasingly packed tighter together. There is a move afoot to a modular deployment pattern of modular systems,” says Villars. “Now is absolutely the time that data center architects should be starting to plan for future designs, in terms of planning for an environment with these converged technologies to determine how to cable and power optimally.”
IDC predicts 2010 will see an increase in converged networking pilot projects with significant technology deployments expected in 2011.
Kevin Komiega is contributing editor for Cabling Installation & Maintenance.