The flattening of data center networks

Sept. 3, 2014
The ToR-EoR discussion is just one consideration in a changing landscape for the design of data center networks and cabling systems.

From the September 2014 issue of Cabling Installation & Maintenance Magazine

Over the past several months, the concepts of top-of-rack (ToR) and end-of-row (EoR) data center network layouts have been the subject of many seminar presentations, articles in this magazine and others, and technical papers in the cabling industry as well as the wider networking industry. Considerations of when and where to use each approach include management of network moves, adds and changes; cooling of the data center facility; network scalability; and, of course, cost, among others. But the question of whether to use ToR or EoR, while very close to the heart of professionals in the cabling industry, is one part of a broader shift taking place in networking. Specifically, data center networks are getting flatter in terms of their switching architectures.

In essence, the flattening of data center network architectures eliminates at least one "hop" that data makes when moving from one server to another. The traditional switching architecture is commonly called "three-tier." Working backward from the servers, those switch tiers include access switches, aggregation switches, and core switches.
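
As a rough, hypothetical illustration of that hop reduction, the sketch below counts the switches a frame traverses between two servers in different racks under each architecture. The path lists are simplified assumptions for illustration, not figures from the article.

```python
# Illustrative sketch (values assumed, not from the article): counting the
# switches a frame passes through between servers in different racks, in the
# worst case where traffic must climb to the top switch tier and back down.

THREE_TIER_PATH = ["access", "aggregation", "core", "aggregation", "access"]
TWO_TIER_PATH = ["leaf", "spine", "leaf"]  # flattened, leaf-spine style

def switch_hops(path):
    """Each switch traversed counts as one hop."""
    return len(path)

print("Three-tier worst case:", switch_hops(THREE_TIER_PATH), "switch hops")  # 5
print("Flattened worst case: ", switch_hops(TWO_TIER_PATH), "switch hops")    # 3
```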

In late July, Cisco turned on the first deployment of its Application Centric Infrastructure (ACI) fabric at its engineering data center facility in San Jose, CA. Shown here is the Nexus 9508 spine switch; several of these were combined with Nexus 9396 leaf switches, the Cisco Application Policy Infrastructure Controller (APIC), and application programming interfaces to achieve the ACI fabric deployment.

Three-tier vs. fat-tree

As Gary Bernstein, RCDD, DCDC, senior director of product management at Leviton (www.leviton.com/networksolutions), has pointed out in an article he authored for this magazine, "Three-tier architecture comes with a number of disadvantages, including higher latency and higher energy requirements. New solutions are needed to optimize performance." ("New switch architectures' impact on 40/100G data center migration," February 2014)

In a web-based seminar that is available for on-demand viewing, Siemon's (www.siemon.com) global director for data center solutions and services, Carrie Higbie, explains that this three-tier architecture produces a significant amount of what is called "north-south" traffic: traffic that flows "up and down" (north and south) through the switch tiers before ultimately arriving at its destination server. Removing at least one layer of switches reduces the amount of this north-south traffic, enabling a more direct "east-west" path for the data between its source server and its destination server.

Higbie points out that the well-intentioned three-tier architecture has been shown to have flaws, including the latency and energy-use issues that Bernstein also discusses, as well as others. "Three-tier was supposed to be a big problem-solver," Higbie says, "but most data centers have found there is a lot of wasteful port spend." Among that inefficient spend is the necessity to establish inactive links, particularly between the access and aggregation switches. "As you set out primary and secondary networks, only one of these can be active at a time," she explains. "You're really using about 50 percent of the spend on ports that are just going to be there in case the [primary] link goes down." Furthermore, she says, when a primary link does go down and the inactive/backup link must be used, "the switch stops traffic until it can bring up the secondary link. Depending on the data center, that wait could run from a few seconds up to a minute. That's not acceptable."
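
Higbie's "about 50 percent of the spend" observation can be put into simple numbers. The sketch below is a hypothetical calculation that assumes one idle standby uplink is provisioned for every active uplink between the access and aggregation tiers; the port counts are illustrative only.

```python
# Hypothetical illustration of the active/standby port-spend point:
# every active uplink between the access and aggregation tiers is paired
# with a backup port that sits idle unless the primary link fails.

active_uplinks = 48    # assumed number of active uplink ports
standby_uplinks = 48   # one idle backup provisioned per active link
total_ports = active_uplinks + standby_uplinks

utilization = active_uplinks / total_ports
print(f"Ports carrying traffic:       {utilization:.0%}")      # 50%
print(f"Ports idle awaiting failover: {1 - utilization:.0%}")  # 50%
```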

Alternative architectures, known as network fabrics, are now regularly being deployed rather than three-tier architectures. In Bernstein's article as well as Higbie's presentation, multiple fabric types are described and discussed, including full-mesh, interconnected mesh, centralized and virtualized switch. But the fabric that appears to be leading the market race is the fat-tree, which is also commonly called leaf-spine. Bernstein explains, "Fat-tree architecture features multiple connections between interconnection switches (spine switches) and access switches (leaf switches) to support high-performance computer clustering. In addition to flattening and scaling out Layer 2 networks at the edge, fat-tree architecture also creates a non-blocking, low-latency fabric. This type of switch architecture is typically implemented in large data centers."
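
One way to see the "non-blocking" property Bernstein describes is to compare a leaf switch's server-facing bandwidth with its spine-facing bandwidth. The sketch below uses assumed port counts and speeds purely for illustration; none of the figures come from the article.

```python
# Minimal sketch of a leaf-spine oversubscription check, with assumed
# port counts and speeds (not taken from the article).

def oversubscription(server_ports, server_gbps, uplinks, uplink_gbps):
    """Ratio of downstream (server-facing) to upstream (spine-facing) bandwidth.
    A ratio of 1.0 or less means the leaf switch is non-blocking."""
    return (server_ports * server_gbps) / (uplinks * uplink_gbps)

# Example: 48 x 10G server ports and 12 x 40G uplinks, one to each spine.
ratio = oversubscription(server_ports=48, server_gbps=10,
                         uplinks=12, uplink_gbps=40)
print(f"Oversubscription ratio: {ratio:.1f}:1")  # 1.0:1 -> non-blocking
```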

In a fat-tree architecture, the volume of north-south traffic flow is significantly reduced compared to what takes place in three-tier architectures. Fat-tree achieves more-efficient east-west, server-to-server communication. Illustration source: ANSI/TIA-942-A-1

Ties to virtualization

The architecture comprises two layers of switching: access switches, which connect to servers, and interconnection switches, which connect to the access switches. Within a TIA-942-A-based data center arrangement, servers reside in the equipment distribution area (EDA), while access switches can reside either in the horizontal distribution area (HDA) for end-of-row setups or in the EDA for top-of-rack setups. Interconnection switches reside in the main distribution area (MDA), or potentially in the intermediate distribution area (IDA) when an IDA exists.
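
For quick reference, the placement just described can be summarized as a simple lookup. The sketch below is only a reader's aid using the TIA-942-A area names; it is not a data structure defined by any standard.

```python
# Illustrative summary of where each switch role lands in a TIA-942-A
# layout, per the placement described above.

SWITCH_PLACEMENT = {
    "servers":                    "EDA",         # equipment distribution area
    "access (leaf), end-of-row":  "HDA",         # horizontal distribution area
    "access (leaf), top-of-rack": "EDA",         # in the server cabinet itself
    "interconnection (spine)":    "MDA or IDA",  # main or intermediate distribution area
}

for role, area in SWITCH_PLACEMENT.items():
    print(f"{role:30s} -> {area}")
```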

In Siemon's web seminar, Higbie points out that in many instances when a data center network makes the transition from three-tiered switching to a fat-tree fabric, "What used to be a switch is now a pass-through fiber connection." This, she reminds everyone, requires network administrators to "pay attention to link loss in channels. With low-loss connectors, you can increase the number of connections you have in these channels."
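
Higbie's reminder about link loss can be made concrete with a simple budget check. The sketch below uses assumed, typical figures; the channel budget, fiber attenuation, and connector losses are illustrative values, not quoted from the article or from any standard.

```python
# Hypothetical channel-loss budget sketch. All loss values are assumed
# for illustration only.

FIBER_LOSS_DB_PER_KM = 3.5   # assumed multimode attenuation at 850 nm
STANDARD_CONN_DB = 0.5       # assumed loss per mated connection, standard connector
LOW_LOSS_CONN_DB = 0.35      # assumed loss per mated connection, low-loss connector
CHANNEL_BUDGET_DB = 1.9      # assumed total optical channel budget

def connections_allowed(length_m, conn_loss_db):
    """How many mated connections fit in the budget over a given fiber length."""
    fiber_db = FIBER_LOSS_DB_PER_KM * (length_m / 1000.0)
    return int((CHANNEL_BUDGET_DB - fiber_db) // conn_loss_db)

for conn_db in (STANDARD_CONN_DB, LOW_LOSS_CONN_DB):
    print(f"{conn_db} dB connectors over 100 m:",
          connections_allowed(100, conn_db), "connections")
# Lower-loss connectors leave room for more mated connections in the same channel.
```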

The ability to efficiently achieve server-to-server communication is particularly important when a network employs virtualization. Higbie explains, "When you virtualize and have more server-to-server traffic, and you have SAN [storage area network]-to-SAN movement, approximately 80 percent of your traffic stays within the data center. You want to be sure you have communications that are very conducive to servers talking to other servers, VM [virtual machine] moves, without having to go through a number of hops." The flattening of data center networks and emergence of virtualization are inherently related.

Cisco's home cooking

Recently, networking colossus Cisco (www.cisco.com) announced that at one of its own engineering facilities in San Jose, CA, it turned on the first deployment of its Application Centric Infrastructure (ACI) fabric. ACI is Cisco's technology for achieving virtualization; the deployment uses a spine-leaf (fat-tree) architecture. In a blog post and a video discussion, Cisco distinguished IT engineer Sidney Morgan explained the feat: "By using ACI fabric to simplify and flatten the data center network, we can reduce network operating costs as much as 55 percent and incident management roughly 20 percent."

He described the turning on of ACI as the company's "first major step toward adopting ACI fabric globally. For the deployment, we moved from a Layer 2/Layer 3 pod architecture to a spine-and-leaf architecture. In this design, every leaf switch connects to every spine switch in the fabric, helping to ensure that application nodes are at most only two hops from each other or from IP-based storage. Spine-leaf is optimal in mixed data center environments of hypervisors and physical servers so that traffic can move in an efficient east-west direction versus north-to-south.

"The deployment includes the Cisco Application Policy Infrastructure Controller (APIC), Nexus 9508 spine switches, Nexus 9396 leaf switches, and open northbound and southbound APIs for integration into many platforms for automation, orchestration, and communication with Layer 4 through 7 and virtual switching devices."

Morgan further explained that his team ran the 9508 switch in standalone mode for 90 days before converting it to fabric mode in late July. "We're migrating from a Layer 2/Layer 3 pod architecture to a flat spine-leaf architecture that fundamentally removes identity and location, and allows every node in this data center to communicate with every other node, improving overall utilization."

Many data centers had deployed a spine-leaf/fat-tree architecture before Cisco did so with its own equipment this summer, and many more are destined to do so in the future.

Patrick McLaughlin is our chief editor.

About the Author

Patrick McLaughlin | Chief Editor

Patrick McLaughlin, chief editor of Cabling Installation & Maintenance, has covered the cabling industry for more than 20 years. He has authored hundreds of articles on technical and business topics related to the specification, design, installation, and management of information communications technology systems. McLaughlin has presented at live in-person and online events, and he has spearheaded cablinginstall.com's webcast seminar programs for 15 years.
