Aisle containment is one piece of data center energy efficiency
From the June 2013 issue of Cabling Installation & Maintenance Magazine
Facilities embracing aisle-containment strategies have several questions to answer, including whether to contain the hot or cold aisles, and what else they might do to maximize efficiency.
by Patrick McLaughlin
For data center managers, maximizing the efficiency of energy use is an ongoing pursuit. Because the data center network is dynamic in nature--changing frequently enough that what is in place today does not necessarily represent what will be in place just a few months from now--the strategies and methods used to achieve energy efficiency may need to be just as dynamic.
Aisle containment is one approach many data center managers have taken to make the best use of their power and, especially, their cooling resources. With aisle containment, physical barriers separate a data center's hot aisles from its cold aisles. The fundamental rationale is straightforward: isolating cold air from hot air allows for maximum cooling efficiency.
To implement aisle containment, a data center should be configured in a hot-aisle/cold-aisle layout, as most are. In such a layout, the hot aisle is the area into which exhaust air from network equipment, particularly servers, flows. Traditionally, an individual walking down a data center's hot aisle would see the rear of servers and other network electronics on each side; walking down the cold aisle, that same individual would see the front of the equipment. The hot-aisle/cold-aisle setup calls for the facility's cooling equipment to distribute cold air in the cold aisle, toward the network equipment's intakes. Many products, approaches and best practices exist for keeping cold supply air from mixing with hot exhaust air, because such mixing negates at least some of the cold air's cooling effect. Physical separation of the two aisles, commonly referred to as aisle containment, is one such method.
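To see why mixing matters, consider a simplified flow-weighted mixing model. The sketch below is illustrative only; the supply and exhaust temperatures, the recirculation fractions and the function name are assumptions made for the sake of the example, not figures from the article.

```python
# Minimal sketch: effect of hot-exhaust recirculation on server inlet
# temperature. Assumes equal air density and specific heat, so the
# mixed temperature is a simple flow-weighted average. All values
# are illustrative assumptions.

def inlet_temperature(t_supply_f, t_exhaust_f, recirc_fraction):
    """Server inlet temperature when a fraction of hot exhaust air
    leaks back into the cold aisle and mixes with supply air."""
    return (1 - recirc_fraction) * t_supply_f + recirc_fraction * t_exhaust_f

T_SUPPLY_F = 65.0   # assumed CRAC supply-air temperature
T_EXHAUST_F = 95.0  # assumed server exhaust temperature

for leak in (0.0, 0.10, 0.25, 0.50):
    print(f"{leak:4.0%} recirculation -> "
          f"{inlet_temperature(T_SUPPLY_F, T_EXHAUST_F, leak):5.1f} F at the inlet")
```

Even modest recirculation raises the inlet temperature noticeably, and that is precisely the mixing containment barriers are meant to prevent; without them, operators compensate by supplying colder air or moving more of it, and both cost energy.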
Containing aisles
As evidence that the data center truly is a dynamic facility, the "traditional" setup just described--with exhaust air coming from the rear of network equipment--is no longer a universal truth. Lars Larsen, director of physical support products with Legrand North America's Data Communications Division (www.legrand.us), explains some of what is happening today and how recent technological developments are shifting the hot-aisle/cold-aisle and aisle-containment landscape. "Most new data centers are being set up with hot-aisle/cold-aisle configurations," he notes. "However, effective containment can still be achieved if a facility is not originally laid out that way. For example, top-of-rack switches used to be front-to-back breathing. Cisco redesigned them so they are now rear-to-front breathing, which requires no special air containment within the cabinet. Side-breathing switches require physical support products, such as baffles, that provide the proper airflow."
For those implementing aisle containment, one of the most fundamental questions to answer is which to contain--the hot aisle or the cold aisle? Mark Hirst, product manager for Cannon Technologies' (www.cannontech.co.uk) T4 data center product portfolio, explains, "With hot-aisle containment, the exhaust air from the cabinet is contained and drawn away from the cabinets. Cold-aisle containment regulates the air to be injected into the front of the hardware.
"In both cases, the ultimate goal is to prevent different temperatures of air from mixing. This means that cooling of the data center is effective and the power requirements to cool can themselves be contained and managed."
And, he adds, in a changing environment the cabling can be one element that poses a challenge. "The most common changes in a data center tend to be around cabling and pipe work," Hirst says. "What was once a controlled and well-ordered environment may now be a case of cable runs--power and network--being installed in an ad-hoc way. In a well-run data center, it is not unreasonable to assume this would be properly managed. But the longer it has been since the last major refit, the greater the likelihood of unmanaged cable chaos."
Legrand's Larsen reflects on the deliberation over which area to contain: "There are different schools of thought. If you're designing from scratch, it is important to keep equipment and cabling in the cold aisle. It has been proven that higher temperatures--above 103 degrees [Fahrenheit]--can affect the structured cabling and network performance. So if you do hot-aisle containment, don't put the cabling in that hot aisle.
"If the server is in the cold aisle with filler panels at the rear, you get some benefit by surrounding the servers with the cool air. They are essentially large heat sinks. If you surround them with hot air, they will absorb it and make it harder to cool--even if you are blowing cool air through it. The bottom line is that cold aisle containment can be easier to set up while protecting your network performance.
"Hot-aisle proponents typically want to get the hottest air possible into the CRAC [computer room air conditioning] unit to make it as efficient as possible," Larsen explains. "So if energy efficiency is the number-one priority, network performance will suffer due to the increase in the overall ambient temperature. On the other hand, if efficiency is prioritized second, then it is possible to keep your network protected and running with optimal uptime. Essentially, the CRAC unit may need to run at 97 percent efficiency in order to keep network performance at 100 percent.
"We would always prioritize network performance over energy efficiency, in either containment situation," he emphasizes.
Airflow strategies
Legrand is one of a number of companies providing equipment for aisle containment, accommodating whichever approach the user implements. But in a data center there is more to containment than containing whole aisles. "You also have to consider containment at the rack and cabinet level," Larsen says. "For example, the Cisco 7018 carries a warning about running cables on the right-hand side, where the intake cooling is, because they can block the switch from getting air." Products and systems exist to manage these setups, as Larsen notes: "Our full-height side baffles allow cool air to reach the switch from anywhere, top to bottom, on the intake side of the rack or cabinet. This allows for more-effective cable management by giving the customer the ability to split the cables to both the left and the right. The Cisco Catalyst has to route cables to the right to avoid the fan, and the Cisco Nexus has cables going to the left. You should not have to sacrifice your cable management for containment."
Chatsworth Products Inc. (CPI; www.chatsworth.com) also provides a set of products and technologies for aisle containment. Specifically, CPI offers three standard configurations: cabinet-supported hot-aisle containment, frame-supported hot-aisle containment, and cabinet-supported cold-aisle containment. "As manufacturers push the thermal envelope on servers and other critical IT equipment, cooling costs are on the rise," CPI explains. "To counter this growth and reduce extraneous energy costs and power consumption, data center operators are increasingly turning to aisle containment solutions to help optimize airflow, especially in those situations where cabinet-level isolation is untenable."
Ian Seaton, CPI's global technology manager, authored a paper addressing several aspects of aisle containment. The paper, titled "How much containment is enough?", is available at CPI's website. In it, Seaton discusses the theoretical and scientific motivations behind containment strategies, as well as the practical realities of implementing the numerous available solutions.
Multiple directions
Among several other topics, Seaton comments on the new reality that not all equipment is front-to-back breathing. "Equipment that does not breathe front-to-rear or front-to-top often compromises the integrity of the best containment system or, worse yet, establishes an excuse why deploying containment is not going to work and/or be worth the investment and effort," he says in the paper. "These responses to nonstandard air-path IT equipment are ill-informed and wasteful. There is no reason to compromise or even avoid containment due to equipment with sub-optimized airflow paths. For side-to-side breathing equipment, standard equipment cabinets have been available on the market for a couple years. For front-to-side, side-to-rear, side-to-front, or even rear-to-front breathing equipment, simple rack-mount shrouds and duct assemblies provide a path for integration into fully contained spaces."
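Seaton's point, that nonstandard airflow paths call for the right accessory rather than for abandoning containment, lends itself to a simple planning aid. The sketch below maps airflow patterns to the kinds of hardware named in his paper; the equipment inventory, pattern strings and accessory labels are hypothetical, chosen only to illustrate the idea.

```python
# Sketch of a containment planning aid: for each device's airflow
# pattern, suggest the accessory needed to integrate it into a fully
# contained space. Pattern names and accessory labels are hypothetical.

ACCESSORY_BY_AIRFLOW = {
    "front-to-rear": "none; standard containment handles it",
    "front-to-top":  "none; standard containment handles it",
    "side-to-side":  "equipment cabinet with full-height side baffles",
    "front-to-side": "rack-mount shroud/duct assembly",
    "side-to-rear":  "rack-mount shroud/duct assembly",
    "side-to-front": "rack-mount shroud/duct assembly",
    "rear-to-front": "rack-mount shroud/duct assembly",
}

inventory = [  # hypothetical equipment list
    ("top-of-rack switch", "rear-to-front"),
    ("core switch",        "side-to-side"),
    ("server",             "front-to-rear"),
]

for device, airflow in inventory:
    accessory = ACCESSORY_BY_AIRFLOW.get(airflow, "review manually")
    print(f"{device}: {airflow} airflow -> {accessory}")
```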
Actions taken at the rack or cabinet level also can significantly contribute to the efficient use of energy, particularly energy spent on cooling.
As a data center's network equipment, transmission speeds and physical makeup change, keeping up with cooling strategies is a must. What works best in one setup may not be the best solution once a change is implemented. Data center managers continue to be challenged to maximize the efficiency of their energy spend, and administering effective airflow strategies is a key to that efficiency.
Patrick McLaughlin is our chief editor.