From the August 2015 Issue of Cabling Installation & Maintenance Magazine
The abstraction taking place at higher layers in the network can have a real effect on the layer-one cabling infrastructure. And maybe vice versa.
By Patrick McLaughlin
Oftentimes professionals in the cabling industry are spectators, not participants, in the early stages of computing and networking technology evolution. While vendors of servers or switches compete for market share and work to proliferate their respective technologies, the cabling industry as a whole can watch with interest as the marketplace decides winners and losers. These technology developments eventually affect the cabling marketplace, most often in the form of two questions: 1) Will this development require more or less cabling in networks? 2) What performance level of cabling will be required?
For a few years software-defined networking (SDN) has fit the above description. SDN is defined by some (e.g., Wikipedia as of mid-July 2015) as "an approach to computer networking that allows network administrators to manage network services through abstraction of lower-level functionality. This is done by decoupling the system that makes decisions about where traffic is sent (the control plane) from the underlying systems that forward traffic to the selected destination (the data plane). The inventors and vendors of these systems claim that this simplifies networking."
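To make that decoupling concrete, here is a minimal sketch in Python (all class and method names are illustrative; this is not OpenFlow or any vendor's API). The controller holds the network-wide view and computes forwarding rules--the control plane--while switches do nothing but match traffic against the rules pushed to them--the data plane.

    # Minimal sketch of SDN's control/data-plane split (illustrative only).
    class Switch:
        """Data plane: forwards traffic using rules pushed from outside."""
        def __init__(self, name):
            self.name = name
            self.flow_table = {}  # destination address -> output port

        def install_rule(self, dst, out_port):
            self.flow_table[dst] = out_port

        def forward(self, packet):
            # No local routing decision: just match and forward.
            return self.flow_table.get(packet["dst"])  # None -> ask controller

    class Controller:
        """Control plane: holds the topology and decides where traffic goes."""
        def __init__(self, switches):
            self.switches = switches

        def program_path(self, dst, hops):
            # hops: list of (switch, out_port) pairs along the chosen path
            for switch, port in hops:
                switch.install_rule(dst, port)

    # The controller programs an end-to-end path; switches only forward.
    s1, s2 = Switch("leaf-1"), Switch("spine-1")
    ctrl = Controller([s1, s2])
    ctrl.program_path(dst="10.0.2.5", hops=[(s1, 48), (s2, 12)])
    print(s1.forward({"dst": "10.0.2.5"}))  # -> 48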
Prognosticators speak
According to several market-research and analysis firms, SDN is taking hold in data centers as well as enterprise environments. Last September, Infonetics Research (www.infonetics.com) publicly released some details of its 2014 Data Center and Enterprise SDN Hardware and Software report. The firm's directing analyst for data center, cloud, and SDN, Cliff Grossner, Ph.D., commented at the time, "There is no longer any question about SDN playing a role in data center and enterprise networks. Data center and enterprise SDN revenue, including SDN-capable Ethernet switches and SDN controllers, was up 192 percent year-over-year [2013 over 2012]. The early SDN explorers were joined in 2013 by the majority of traditional switch vendors and server virtualization vendors offering a wide selection of SDN products.
"Even more eye-opening," Grossner continued, "in-use for SDN Ethernet switch revenue, including branded Ethernet switches, virtual switches, and bare metal switches, grew more than 10-fold in 2013 from the prior year, driven by significant increases in white box bare metal switch deployments by very large cloud service providers such as Google and Amazon."
Infonetics said leaders in the SDN market would be solidified over the following two years "as 2014 lab trials give way to live production deployments." The firm forecast what it calls the "real market for SDN--that is, in-use-for-SDN Ethernet switches and controllers," to reach $9.5 billion by 2018.
Infonetics followed up that report with a survey of 153 medium and large North American businesses. When announcing results of that survey in February 2015, the firm said it found "79 percent are planning to have SDN in live production in the data center in 2017."
Grossner said, "As SDN in the enterprise data center grows legs in 2015, thought leadership [among vendors] in this nascent market will give way to market share leaders with measurable revenue. Respondents are moving from lab trials in 2015 to production trials in 2016 and to live production in 2017."
How much, what type?
That all sounds promising for SDN, but what will it mean for the two big questions we want answered: How much cabling, and what type? As we have published in the past ("Constructing a software-defined network over a robust infrastructure," August 2014), the current issue from a cabling perspective is not implementing SDN, but rather laying the groundwork for it.
In that article, CommScope (www.commscope.com) manager for technologies and solutions, Frank Yang, explained, "It is important to know that SDN relies heavily on a high-bandwidth and high-performance physical-layer infrastructure. SDN is essentially network virtualization, and a physical network device may be shared by multiple virtual networks … A high-bandwidth cabling infrastructure is needed to provision the necessary networking capacity shared by multi-tenants. A tenant can represent a networking service such as a cloud. The cloud basically is a service, and a high-performance cabling infrastructure is needed to guarantee cloud performance specified in the service level agreement. To build the infrastructure for SDN, 10-, 40-, or even 100-Gbit Ethernet is recommended. The 10/40/100-GbE not only provides high bandwidth, but also provides the low latency needed to achieve networking performance excellence."
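The latency half of that recommendation can be made concrete with simple arithmetic. Serialization delay--the time just to clock a frame onto the wire, and only one component of end-to-end latency--falls linearly with link speed:

    # Serialization delay for a 1,500-byte Ethernet frame at each link rate
    # (ignores preamble and inter-frame gap; back-of-the-envelope only).
    FRAME_BITS = 1500 * 8

    for gbps in (10, 40, 100):
        delay_us = FRAME_BITS / (gbps * 1e9) * 1e6
        print(f"{gbps:>3} GbE: {delay_us:.2f} microseconds per frame")
    # 10 GbE: 1.20, 40 GbE: 0.30, 100 GbE: 0.12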
In that sense, the refrain for the cabling industry is familiar: Build the most robust infrastructure you can--not because you don't know what's coming down the pike, but because you do know what's coming, and it's going to require significant throughput capacity.
A mountain of fiber
At the same time that most in the cabling industry look at SDN as an application of sorts that could lead to the installation of more and better cabling in data center and enterprise environments, a company called Fiber Mountain (www.fibermountain.com) is taking an entirely different approach to the physical-layer cabling infrastructure's role in a software-defined network. Fiber Mountain's founder and chief executive officer is M.H. Raza, formerly an executive with ADC (later acquired by TE Connectivity), where he led development of the Quareo technology that allows the electronic tracing, monitoring and authentication of every connection in a network. Also a veteran of manufacturers operating at higher network layers, Raza brought Fiber Mountain out of stealth mode in the latter part of 2014.
Simplified--perhaps oversimplified--Fiber Mountain's essential concept is less switching gear, and more fiber, in the middle of the network. Beyond that, through the company's Glass Core technology, the fiber connections themselves are software-defined. On its website, Fiber Mountain describes how and why many data centers have moved from a three-tier switching architecture to a two-tier architecture commonly called spine-leaf or fat-tree. Despite the efficiency gains achieved by moving from three- to two-tier switching, Fiber Mountain contends there is more efficiency to be had by moving to a one-tier network--an architecture that promises market growth and potential for physical-layer players, particularly those in the fiber-optic cabling and equipment space.
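The hop-count arithmetic behind that contention is easy to sketch. Worst case, rack-to-rack traffic in a three-tier network climbs from an access switch through aggregation to the core and back down; leaf-spine cuts that to three switch hops; a one-tier design aims for one. The figures below are textbook worst cases, and the one-microsecond per-hop cost is a round illustrative number, not a measurement:

    # Worst-case switch hops for rack-to-rack traffic, and the latency each
    # architecture adds if every hop costs ~1 microsecond (illustrative).
    PER_HOP_US = 1.0

    architectures = {
        "three-tier (access > aggregation > core)": 5,
        "two-tier leaf-spine": 3,
        "one-tier optical cross-connect": 1,
    }

    for name, hops in architectures.items():
        print(f"{name}: {hops} switch hop(s), ~{hops * PER_HOP_US:.0f} us added")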
"Both the three-tier architecture and the two-tier architecture have limitations because they are ‘fixed' architectures," the company says. "Data center planners have to predetermine how many ports are required to exit a cabinet. Flexibility in this area could be a tremendous benefit and would allow data center managers to reconfigure the network when they wanted to, dynamically, and without sending human hands into the data center to do it. Fiber Mountain proposes a flexible architecture that is controlled and configured via software. The number of 10- or 40- or 100-Gbit/sec ports that exit the rack or the row can be changed dynamically, and the destinations that these ports point to are also configured via software." Software-defined fiber-optic cabling connections.
The company has developed a number of products and technologies to meet data centers' connectivity needs in the way it describes; the offering that most closely touches the cabling industry is Glass Core. Fiber Mountain explains, "The Glass Core architecture comprises 200 to 400 fiber-optic strands, using ribbon cable technology, that connect each cabinet into the Glass Core. Typical 7-foot cabinets will need 200 fibers. Larger fiber counts per cabinet are required when using taller cabinets, in very server-dense environments, and for connectivity of servers that are based on silicon photonics advancements."
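Translating those per-cabinet fiber counts into trunk hardware is simple ceiling division. Assuming every fiber is terminated on the 24-fiber MPO-style connectors described in the next paragraph (the article implies, but does not state, this one-to-one trunking), the counts work out as follows:

    import math

    # Fibers per cabinet from Fiber Mountain's description; trunks are
    # 24-fiber MPO connectors (assumed to carry all cabinet fibers).
    FIBERS_PER_MPO = 24

    for cabinet_fibers in (200, 400):
        trunks = math.ceil(cabinet_fibers / FIBERS_PER_MPO)
        print(f"{cabinet_fibers} fibers/cabinet -> {trunks} x 24-fiber MPO "
              f"trunks ({trunks * FIBERS_PER_MPO} fibers terminated)")
    # 200 -> 9 trunks (216 fibers); 400 -> 17 trunks (408 fibers)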
These multiple-hundred strands of fiber from each cabinet connect to the Glass Core, which comprises fiber-optic connections using 24-fiber MPO-style high-fiber-count connectors and the AP-4240 Optical Path Exchange (OPX), which can reconfigure the fiber connections. Programmable light paths (PLPs) are created over the physical connectivity via software to deliver traffic to intended destinations in a single hop. The ability to deliver traffic from origin to destination in a single hop is the essential value proposition of Fiber Mountain's technology. Because the company's proposal does not include multi-wavelength technologies such as WDM within the data center, each light path needs its own fibers--which drives the need for a lot of fiber-optic cable. Replacing large switches with fiber-optic cables advances perennial data center objectives: reduced power consumption and heat dissipation, less required space, lower maintenance costs, and less complexity. The company summarizes: "Replacing core and aggregation/spine switches with a mountain of fiber simplifies the network. We make it easy for data center customers to deploy this technology because it co-resides with the existing equipment, without the need to rip and replace."
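Conceptually, the OPX behaves like a remotely settable patch panel: a mapping from input fibers to output fibers that software can rewrite without sending anyone into the aisle. The toy model below illustrates that idea only; the class, method names, and port count are hypothetical, not Fiber Mountain's actual interface:

    class OpticalPathExchange:
        """Toy model of a software-configurable fiber cross-connect."""
        def __init__(self, port_count):
            self.port_count = port_count
            self.cross_connects = {}  # input port -> output port

        def program_light_path(self, in_port, out_port):
            """Set up one programmable light path: a single physical hop."""
            if not (0 <= in_port < self.port_count
                    and 0 <= out_port < self.port_count):
                raise ValueError("port out of range")
            self.cross_connects[in_port] = out_port

        def egress_port(self, in_port):
            """Traffic entering in_port exits here directly--one hop."""
            return self.cross_connects[in_port]

    # Re-point a rack's uplink fiber from one destination to another,
    # entirely in software--no human hands in the data center.
    opx = OpticalPathExchange(port_count=256)
    opx.program_light_path(in_port=3, out_port=7)
    opx.program_light_path(in_port=3, out_port=12)  # reconfigured dynamically
    print(opx.egress_port(3))  # -> 12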
Raza said co-existence is a characteristic that network administrators are likely to find appealing because they can try the Fiber Mountain approach on a single segment of their network initially. "Fiber Mountain recommends that a customer start by deploying the Glass Core over 10 to 15 racks," the company noted. "This allows the well-understood concept of a ‘row' to be maintained, and then expanded to a larger number of interconnected racks. The flexibility offered by this technology enables many different types of architectures and formations."
Glass Core is just one part of Fiber Mountain's overall technology offering, but it is the part that hits closest to home for structured cabling professionals. In an interview, Raza explained that some of the system's other capabilities appeal to network administrators for reasons that reside farther up the network stack. For the cabling industry, when deploying Fiber Mountain's technology, the two questions--how much, and what type, of cabling?--can be answered "plenty" and "high-performance."
Prognosticators say that SDN is reshaping the network switching market. Dell'Oro Group (www.delloro.com) said it believes "the demands of the cloud and higher speeds of Ethernet will cause the vendor landscape in the data center market to shift significantly … While Cisco and HP were already above five percent market share, Arista and the white-box/bare-metal switching segment joined them by virtue of their greater-than-40-percent revenue growth posted in 2014."
Fiber Mountain's ability to co-exist with current architectures, and its recommended approach of initially serving 10 to 15 racks, may allow the company and its "software-defined-fiber-connectivity" technology to capitalize on market growth.
Patrick McLaughlin is our chief editor.