From the April 2013 issue of Cabling Installation & Maintenance Magazine
Options abound and the stakes are high, so choosing where to put cooling units in a data center is of paramount concern.
by Mark Hirst, Cannon Technologies
There is an awful lot of heat to remove from the racks in your data center, and with every technology refresh there is ever more heat. Per-rack dissipation has gone from 1 kW to 4 or 5 kW, with some facilities at 12 to 15 kW and 60 kW possible. But you need to juggle space, cooling efficiency and a load of other factors. So where should you put your cooling units: within the row, within the rack, at the top, bottom or side?
The days when racks in the data center drew a paltry 2 kW are, for many organizations, long gone. Virtualization, blade servers, high performance computing (HPC) and massive storage arrays mean that for many, the average power draw per rack is somewhere between 15 kW and 45 kW. In highly dense environments that is expected to rise to 60 kW. All that power creates a huge amount of heat that must be dissipated.
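To put those numbers in perspective, essentially all of a rack's electrical draw ends up as heat that the airflow has to carry away. The quick sketch below (illustrative figures only, not taken from the article) uses the standard sensible-heat relationship to estimate how much air a 15 kW, 45 kW or 60 kW rack needs, assuming a 12-degree C temperature rise across the equipment.

```python
# Rough sizing sketch (illustrative assumptions, not taken from the article):
# how much airflow it takes to carry a rack's heat load away as sensible heat.
RHO_AIR = 1.2      # kg/m^3, approximate air density at room temperature
CP_AIR = 1005.0    # J/(kg*K), specific heat of air

def required_airflow_m3s(heat_load_w: float, delta_t_k: float) -> float:
    """Volumetric airflow (m^3/s) needed to remove heat_load_w with a delta_t_k rise."""
    return heat_load_w / (RHO_AIR * CP_AIR * delta_t_k)

# Assume a 12 K (about 22 F) air temperature rise across the rack.
for kw in (15, 45, 60):
    flow = required_airflow_m3s(kw * 1000, delta_t_k=12)
    print(f"{kw} kW rack: ~{flow:.1f} m^3/s (~{flow * 2119:.0f} CFM)")
```

Even at the low end of that range, a single rack calls for on the order of a cubic meter of air per second, which is why placement of the cooling units matters so much.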
Cooperation is key
Before beginning to address this level of load, the relationship between the information technology (IT) teams and the facilities-management teams needs to change. Data centers need to become carefully orchestrated environments in which any change is properly considered for its impact on cooling as well as its impact on IT. As the load in a rack grows, the cooling to handle it has to be planned alongside it.
Start with these three actions.
- Use thermal imaging to take snapshots over time and see where heat is building up. Ideally, tie these snapshots to workload peaks so they capture the maximum heat load.
- Build computational fluid dynamics (CFD) models of the data center to get a detailed view of how air is moving and where heat needs to be managed.
- Use in-rack sensors to detect changes to the microclimates within racks.
Without these measures to gather data on heat, it is extremely difficult to know what needs to be dissipated and where.
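As a rough illustration of the in-rack sensor idea, the sketch below flags racks whose inlet temperature runs well above the room average. The rack names, sample readings and 5-degree C margin are assumptions; a real deployment would poll the actual monitoring system and log readings over time so they can be tied back to workload peaks.

```python
# Minimal hotspot-detection sketch (illustrative only). Rack names, readings
# and the 5 C margin are assumptions, not a specific vendor's API.
import statistics

HOTSPOT_MARGIN_C = 5.0  # flag racks running this far above the room average

def find_hotspots(inlet_temps_c: dict) -> list:
    """Return the rack IDs whose inlet temperature is well above the average."""
    avg = statistics.mean(inlet_temps_c.values())
    return [rack for rack, t in inlet_temps_c.items() if t > avg + HOTSPOT_MARGIN_C]

# Example reading taken during a workload peak (made-up numbers):
sample = {"rack-A1": 22.5, "rack-A2": 23.0, "rack-B1": 31.5, "rack-B2": 24.0}
print(find_hotspots(sample))  # ['rack-B1']
```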
Technologies
There are many ways to remove heat from the data center, including computer room air conditioning (CRAC) units, free air cooling, liquid cooling, within-row cooling and in-rack cooling.
Placement
Once the technology has been chosen, the next step is deciding where to place it. Despite the rise of free air cooling and liquid cooling solutions, CRAC units are still the most common way of cooling a data center. Once installed, however, CRAC units are inflexible. So what should you consider when deciding on placement?
Placing CRAC units at the edge of the room, at right angles to the hardware, is no longer acceptable. It creates problems such as vortices, where air gets trapped between pieces of equipment. Airflow is also affected by the placement of the racks, which encourages hot spots to form; those hot spots then require secondary cooling to be installed.
With hot/cold aisle containment, CRAC units need to be perpendicular to the hot aisle; careful monitoring of airflow is important to ensure that heat is evenly removed from the aisle. Otherwise, hot spots will still occur.
Within-row cooling (WIRC) helps get the cooling to where it is most needed. As racks get denser and heat climbs, WIRC allows cooling to be ramped up and down right at the source of the problem. This helps keep an even temperature across the hardware and balance cost against workload.
If the problem is not spread across multiple aisles but confined to a single row of racks, use open-door containment with WIRC. In this approach the doors between racks are open, allowing air to flow across the racks but not back out into the aisle. Place the cooling units in the middle of the row, then arrange the equipment so that the racks with the highest heat loads sit closest to the WIRC units.
For blade servers and HPC, consider in-rack cooling. This solution works best where there are workload optimization tools that provide accurate data about increases in power load, so that as the power load rises, the cooling can be increased synchronously.
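A minimal sketch of that ramp-up logic is shown below. The thresholds are assumptions, and the function simply maps measured rack power to a cooling output percentage; in practice the workload optimization tools and the cooling controller would each expose their own interfaces.

```python
# Sketch of ramping cooling in step with rack power (assumed thresholds only;
# real workload tools and cooling controllers have their own interfaces).
MIN_COOLING_PCT = 20.0   # keep a baseline airflow at all times
MAX_RACK_KW = 60.0       # assumed design ceiling for a fully loaded rack

def cooling_setpoint_pct(rack_power_kw: float) -> float:
    """Scale cooling output linearly with the measured rack power draw."""
    pct = MIN_COOLING_PCT + (100.0 - MIN_COOLING_PCT) * (rack_power_kw / MAX_RACK_KW)
    return max(MIN_COOLING_PCT, min(100.0, pct))

for kw in (5, 15, 45, 60):
    print(f"{kw} kW -> cooling at {cooling_setpoint_pct(kw):.0f}%")
```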
When placing any cooling solution, best practice is to keep the airflow path as short as possible. This makes the airflow more predictable and significantly improves the efficiency of the solution.
Mark Hirst is T4 product manager at Cannon Technologies (www.cannontech.co.uk).