By Marc Cram, Legrand
In 2022, data centers and their power distribution equipment need to meet extreme performance demands; server cabinets and racks need to be architected for maximum adaptability. Add supercomputing and Artificial Intelligence (AI) to this equation, and it’s clear that many data centers are not plug and play—they often must conform to unique physical architectures.
One such architecture is edge computing, which is designed to put applications and data closer to end users. But space is often at a premium, and remote monitoring and remediation are absolute must-haves.
No matter the form factor, data center operators are constantly challenged to find custom solutions for delivering power, cooling, and connectivity.
Data centers require significant power
Feeding a data center's power needs and distributing that electrical power once it's inside the facility have always been challenges, especially when managing power at a granular level. So what is the solution?
In terms of remote access, power, and white space infrastructure, off-the-shelf and semi-custom solutions satisfy the needs of most data center applications. But what about the need for ongoing improvements in efficiency and sustainability?
This need leads many High-Performance Computing (HPC) installations, AI applications, hyperscale data centers, and telecom operators to find custom solutions for power density, cooling, and connectivity.
Of course, supercomputing requires everything to be physically close together to maximize throughput, AI workloads run on specialized processors, and edge computing is inherently distributed. These variations exist because each software workload has unique power consumption requirements. Simply put: the application drives the architectural choices for hardware and its environment.
The downside of this philosophy is that it leaves little room for distributing power to rack-mounted devices. That's why server racks require a customized Rack Power Distribution Unit (PDU) solution:
- There is little room at the back of the rack to house a PDU.
- Taller racks with more servers create high-outlet-density situations.
- There is a strong likelihood of little to no airflow.
The same thoughtful consideration allocated to a data center’s design must also be given to how power will be distributed to feed intense processing and AI applications.
Power distribution cannot be a second thought
AI is a revolution, and it requires massive amounts of computing power—and electricity—to devise and train algorithms. These specialized circumstances require additional thought when distributing power or capacity, and overheating could quickly become an issue. When designing a power distribution plan for an AI facility, keep these potential challenges in mind:
- You may need a PDU to help with capacity planning and maximizing electrical power utilization.
- AI facilities often require custom racks, which demand ingenuity in the location of PDUs.
- High density and higher power installations test the limitations of standard PDUs.
- Your power density goes beyond what a C19 or other standard outlet can deliver.
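To make the capacity-planning point above concrete, a rack's usable power budget can be sketched from its branch-circuit rating. The figures here (a 208 V, 30 A three-phase feed, 450 W per server, and derating continuous loads to 80% of breaker capacity, a common North American practice) are illustrative assumptions, not vendor specifications:

```python
import math

def rack_power_budget_w(volts: float, amps: float, phases: int = 1,
                        derate: float = 0.8) -> float:
    """Usable rack power in watts, derating continuous loads to 80%
    of breaker capacity (common North American practice)."""
    if phases == 3:
        # Three-phase power from line-to-line voltage: sqrt(3) * V * I
        return math.sqrt(3) * volts * amps * derate
    return volts * amps * derate

# Example: 208 V line-to-line, 30 A, three-phase rack feed
budget = rack_power_budget_w(208, 30, phases=3)

# Hypothetical per-server draw: will 18 servers at 450 W fit?
servers, draw_w = 18, 450
print(f"budget: {budget:.0f} W, load: {servers * draw_w} W, "
      f"headroom: {budget - servers * draw_w:.0f} W")
```

A metered or intelligent PDU automates exactly this kind of arithmetic in real time, which is why the first bullet above calls one out for capacity planning.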
When it comes to edge computing, placing computing services closer to where data is generated benefits those who rely on it: IoT devices, smart cities, and autonomous vehicles bear testimony to this. However, these new applications also demand continuous, real-time computing power at the edge.
Unfortunately, these mini data centers sit in remote, unmanned locations, such as the base of a cell tower or a street corner, making them time-consuming and difficult to manage. In these circumstances, a different power distribution mindset must be applied, one that calls for:
- Environmental monitoring to protect against temperature and power extremes outside the equipment's operating capabilities.
- Remote monitoring and management of power consumption.
- PDUs with onboard communications capable of scheduling outlet power on and off.
- PDUs capable of shedding load to maximize battery uptime when the unit exceeds thresholds.
- PDUs rated for operating environments beyond the usual 0-60 degrees Celsius.
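The monitoring and load-shedding behavior described above can be sketched in a few lines. Everything here is a hypothetical model (the `Outlet` class, priority tiers, and the 55 °C threshold are illustrative assumptions); real switched PDUs expose similar controls through SNMP or vendor APIs rather than this interface:

```python
from dataclasses import dataclass

@dataclass
class Outlet:
    name: str
    priority: int  # lower number = more critical, shed last
    on: bool = True

TEMP_SHED_C = 55.0  # hypothetical trip point near the top of a 0-60 C rating

def shed_load(outlets: list[Outlet], temp_c: float, on_battery: bool) -> list[str]:
    """Switch off the least-critical outlets when the unit runs hot or is
    on battery, extending uptime for the loads that matter most."""
    if temp_c < TEMP_SHED_C and not on_battery:
        return []  # normal conditions: leave everything powered
    shed = []
    for outlet in sorted(outlets, key=lambda o: o.priority, reverse=True):
        if outlet.priority > 1 and outlet.on:  # never shed priority-1 loads
            outlet.on = False
            shed.append(outlet.name)
    return shed

outlets = [Outlet("router", 1), Outlet("camera", 3), Outlet("test-rig", 2)]
print(shed_load(outlets, temp_c=58.0, on_battery=False))
# -> ['camera', 'test-rig']  (the router stays up)
```

The same priority-ordered loop covers both triggers in the list above: a temperature excursion and a switch to battery power.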
Conclusion
As mentioned, off-the-shelf and semi-custom solutions for remote access, power, and white space infrastructure satisfy the needs of typical data center applications. However, HPC installations, AI applications, hyperscale data centers, and edge computing are quickly moving from unique, one-off facilities into the workload-processing norm.
As this continues, operators will still need to apply custom solutions for layout, power density, cooling, and overall connectivity. Part of the customization must include careful thought to how power will be distributed throughout the rack and, just as important—how to monitor the rack’s environmental conditions.
Ordering common PDUs for a highly customized data center is like buying D-cell batteries to power a Tesla: just as batteries are made for specific appliances, customized PDUs are made for specialized server applications. Don't let power distribution be an afterthought that stalls or halts your workloads.
MARC CRAM is director of New Market Development for Legrand's Data, Power, and Control division, which includes the Raritan and Server Technology brands. A technology evangelist, he is driven by a passion to deliver a positive power experience for the data center owner/operator. He earned a bachelor’s degree in electrical engineering from Rice University and has more than 30 years of experience in the field of electronics.