TIA-942-C Data Center Standard Brings a Host of Changes and Updates

Nov. 21, 2024
The recently published 942-C standard makes significant changes to infrastructure requirements to support new technologies such as artificial intelligence, edge data centers, and the IoT.

The years-long undertaking to revise the Telecommunications Industry Association’s TIA-942 telecommunications infrastructure standard for data centers culminated in the publication of ANSI/TIA-942-C in May 2024. In September, the TIA’s Fiber Optics Tech Consortium hosted a webinar featuring three of the primary contributors to the revision process that brought about 942-C:

  • Jonathan Jew, principal, J&M Consultants—Editor of ANSI/TIA-942-C, Secretary, TR-42, Vice-Chair, TR-42.3
  • David Kozischek, manager enterprise networks, Corning Optical Communications—Chair, TR-42.12
  • Jacques Fluet, director, data center program, TIA

This article is a summary of information these three provided and points they made in the webinar. The hour-long webinar can be viewed in its entirety here. And you can purchase the ANSI/TIA-942-C standard here.

Edge Data Centers

Between the time the “B” and “C” revisions of TIA-942 were published, the TIA published Amendment 1 to 942-B, which covered edge data centers. That amendment was incorporated into the C revision. Furthermore, language within the C revision defines and classifies a micro edge data center (µEDC).

“A micro edge data center is a small data center in a premanufactured enclosure that is capable of being remotely monitored and located at the network edge,” Jew said. The creators of the 942-C revision took care to differentiate a µEDC from a small data center that is close to the user; TIA-942-C defines a rating system for the classification of µEDCs that provides requirements and recommendations for Type A and Type B.

A Type A µEDC is part of a network of µEDCs. “If this micro edge data center failed, others in the network could take over its function,” Jew explained. “Type B is the type you may find in a retail store, for example, that relies on a combination of measures internal to the micro edge data center to keep it running,” he added.

In that sense, a Type B µEDC must be more robust on its own than a Type A µEDC, because it cannot rely on other µEDCs in a network to take over if it fails.

“There are quite a few use cases for edge data centers, which have been around for a while,” Jew continued. “They are supporting remote offices, stores, factories. If you package [this type of a data center] in an enclosed system, we have named it a micro edge data center” in the 942-C standard.

One change made within 942-C lowers the minimum floor loading for computer rooms that are less than 20 square meters (approximately 220 square feet). That minimum loading is 5 kPa (approximately 100 lb/ft²). The experts pointed out that floor loading requirements should always be confirmed by a structural engineer, but also noted that previous requirements were unnecessarily stringent for the smallest edge data centers.
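
As a rough illustration of the arithmetic behind that figure (1 kPa is roughly 20.9 lb/ft², so 5 kPa works out to about 104 lb/ft²), here is a minimal Python sketch that converts the units and checks a small room's design load against the 5 kPa minimum. The thresholds come from the paragraph above; the function and example values are hypothetical.

```python
# Illustrative sketch only -- actual floor-loading requirements should
# always be confirmed by a structural engineer, as the panelists noted.

KPA_TO_LB_PER_SQFT = 20.885      # 1 kPa is roughly 20.885 lb/ft^2

SMALL_ROOM_MAX_AREA_M2 = 20.0    # rooms smaller than 20 m^2 (~220 ft^2)
SMALL_ROOM_MIN_LOAD_KPA = 5.0    # 942-C minimum for such rooms, ~100 lb/ft^2

def small_room_load_ok(room_area_m2: float, design_load_kpa: float) -> bool:
    """Check a small computer room's design floor loading against the
    5 kPa minimum that TIA-942-C applies to rooms under 20 m^2."""
    if room_area_m2 >= SMALL_ROOM_MAX_AREA_M2:
        raise ValueError("larger rooms fall under different requirements")
    return design_load_kpa >= SMALL_ROOM_MIN_LOAD_KPA

if __name__ == "__main__":
    load_kpa = 5.0
    print(f"{load_kpa} kPa is about {load_kpa * KPA_TO_LB_PER_SQFT:.0f} lb/ft^2")
    print("Meets the small-room minimum:", small_room_load_ok(15.0, load_kpa))
```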

Thermal Guidelines from ASHRAE

TIA-942-C includes language that originated from ASHRAE, the association that advances heating, ventilation, air-conditioning and refrigeration (HVACR) system design and construction. The guidelines and requirements adopted from ASHRAE apply to edge data centers and other facilities. Jew pointed out, “ASHRAE recommended we look at their technical bulletin ‘Edge Computing: Considerations for Reliable Operation.’” The thermal-management guidelines contained in this bulletin, intended to help data center operators control temperatures within their facilities, include procedures like “putting a tent over the edge data center, so when you open the door, the hot air from the outside doesn’t get inside the data center,” Jew explained.

TIA-942-C incorporates significant content from ASHRAE documents, including a requirement to comply with what is known as ASHRAE’s “recommended envelope.” ASHRAE’s TC 9.9 Thermal Guidelines for Data Processing Environments includes a recommended temperature and humidity envelope for different building classes. Complying with ASHRAE’s recommended temperature and humidity envelopes for classes A1, A2, A3, and A4 is a requirement of 942-C. Additionally, 942-C requires temperature and humidity for high-density air-cooled ICT equipment to meet ASHRAE’s recommended ranges for class H1.

“Operating outside these ranges is permitted for rooms that don’t support air-cooled equipment, or that only support equipment designed to operate outside the envelope, provided any negative impacts of operating outside the envelope are determined and mitigated,” the TIA’s experts explained in the webinar.
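
To make the idea of an envelope concrete, the following Python sketch flags sensor readings that fall outside a recommended operating window. The 18 to 27°C dry-bulb range reflects ASHRAE's commonly cited recommended envelope; the humidity ceiling shown is a placeholder, and real designs should use the limits published in the current ASHRAE Thermal Guidelines.

```python
from dataclasses import dataclass

@dataclass
class Envelope:
    """A simple recommended operating window for air-cooled ICT spaces."""
    min_temp_c: float
    max_temp_c: float
    max_rh_pct: float

# 18-27 C dry bulb is ASHRAE's commonly cited recommended range for classes
# A1-A4; the 60% RH ceiling is a placeholder, not the authoritative limit.
RECOMMENDED_A1_TO_A4 = Envelope(min_temp_c=18.0, max_temp_c=27.0, max_rh_pct=60.0)

def within_envelope(temp_c: float, rh_pct: float,
                    env: Envelope = RECOMMENDED_A1_TO_A4) -> bool:
    """Return True if a reading sits inside the recommended window."""
    return env.min_temp_c <= temp_c <= env.max_temp_c and rh_pct <= env.max_rh_pct

if __name__ == "__main__":
    readings = [(22.5, 45.0), (29.0, 50.0)]   # (dry-bulb C, relative humidity %)
    for temp, rh in readings:
        status = "OK" if within_envelope(temp, rh) else "outside recommended envelope"
        print(f"{temp:.1f} C / {rh:.0f}% RH -> {status}")
```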

Cabinets, Cabling, and Congestion

The practical reality of high-density, high-fiber-count cabling that is prominent in many data centers is addressed in several parts of 942-C. One is the new requirement that in distributors (main distribution area, intermediate distribution area, horizontal distribution area), cabinets must be a minimum of 800 mm (~31.5 inches) wide. The 800-mm-minimum-width requirement is also found in standards published by BICSI, CENELEC, and ISO/IEC. By comparison, and as a practical consideration, the individual tiles within a tile floor on which these cabinets typically stand are 600 mm (~24 inches) wide. “In an MDA or IDA, you would quickly run out of space for patching” with a cabinet narrower than 800 mm, Jew said.

Kozischek added that in some of his data center clients’ facilities, “We’re seeing 32-inch minimum” requirements for cabinet width. “Everybody wants more density in cables, connectors, and hardware.” Without sufficiently wide cabinets in which to patch these dense connections, keeping the workspace neat is impractical. The nearby image, taken from the TIA webinar, shows a data center that uses 24-inch-wide cabinets.

“With 24-inch racks, you end up with optical cables sitting on the floor,” Jew emphasized. “They are easy to step on. Or with cabinets, the door will not close. It’s difficult to manage.”

A change in the standard’s required connector-interface type also affects the distributor areas (MDA, IDA, HDA) as well as the entrance facility/carrier room. The previous version of the standard required the use of the LC and/or MPO connectors in these areas; that requirement has been removed, which allows for use of very small form factor (VSFF) connectors in those spaces. In 942-C, the equipment outlet (EO) is the only space where the LC and/or MPO are required.
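
One way to read that change is as a per-space connector table. The mapping below is an illustrative interpretation of the paragraph above, written as a short Python sketch; it is not wording from the standard.

```python
# Illustrative interpretation of the 942-C connector change described above;
# not wording from the standard itself.
CONNECTOR_RULES = {
    "equipment outlet (EO)": "LC and/or MPO required",
    "MDA / IDA / HDA": "no mandated interface; VSFF connectors may be used",
    "entrance facility / carrier room": "no mandated interface; VSFF connectors may be used",
}

if __name__ == "__main__":
    for space, rule in CONNECTOR_RULES.items():
        print(f"{space}: {rule}")
```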

“We really need these [VSFF] high-density connectors for AI,” Jew commented. In an AI network, each server node draws a considerable amount of power, approximately 10 to 12 kilowatts, he explained. “So, in cabinets where you have only 10, 20, or 30 kW, you’ll have empty spaces and be OK. But in networking areas, you likely need these VSFF connectors.”
“400G is the base speed for AI,” Jew explained. “Next year [2025], it will be 800G. And in 2026, we’ll have 1.6 Terabit, which uses 16 fibers … I was working on a design to connect two AI clusters in two different rooms, and I counted that I needed in the neighborhood of 70,000 or 80,000 fibers between the two rooms.” These massive fiber counts make VSFF connectors, as well as smaller-diameter cables, necessary and invaluable elements in AI networking.
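
The fiber counts Jew describes follow from straightforward multiplication, and a back-of-the-envelope estimate shows why VSFF connectors and smaller-diameter cables matter. In the Python sketch below, the link count is hypothetical; the fibers-per-interface figures (8 for 400G, 16 for 1.6T) are the ones cited in the webinar.

```python
# Back-of-the-envelope fiber count between two AI cluster rooms.
# The link count is hypothetical; fibers per interface follow the figures
# cited in the webinar (8 fibers for 400G, 16 for 1.6T).
FIBERS_PER_INTERFACE = {"400G": 8, "1.6T": 16}

def inter_room_fibers(links: int, speed: str) -> int:
    """Total fibers needed for a given number of inter-room links."""
    return links * FIBERS_PER_INTERFACE[speed]

if __name__ == "__main__":
    links = 4800   # hypothetical number of inter-room links
    for speed in ("400G", "1.6T"):
        print(f"{links:,} links at {speed}: {inter_room_fibers(links, speed):,} fibers")
```

At 1.6T, those hypothetical 4,800 links alone would require roughly 76,800 fibers, which lands in the neighborhood of the 70,000 to 80,000 fibers Jew counted for his two-room design.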

“We have moved past tens of fibers or even hundreds of fibers per connection,” Kozischek added. “We are in the thousands of fibers per connection. And it will be millions.”

TIA-942-C also includes a recommendation for a minimum of 2 fibers for horizontal backbone cabling. While single-fiber backbones can be found in video applications and passive optical networks (PON), nearly every other network type includes a 2-fiber backbone minimum. Most data centers, Jew and Kozischek pointed out, use fibers in multiples of 4 (4, 8, 16) in their backbones.

Copper cabling also has a place in the 942-C standard, just as it has a place in many data centers, even including high-speed, high-density facilities. 942-C recognizes single-pair cabling as a transmission medium. “For IoT, single pair is a great alternative,” Jew opined during the webinar. “It allows you to power and deliver signal over a single pair of wires. In data centers, it can be used for sensors and control and security applications.” There are two categories of single-pair cabling, he also pointed out—SP1-400, which reaches distances up to 400 meters and allows up to 5 intermediate connections; and SP1-1000, which reaches up to 1 kilometer and allows up to 10 intermediate connections.
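
Those two categories lend themselves to a quick sanity check. The Python sketch below encodes the reach and intermediate-connection limits exactly as given above; the helper function and the example link are hypothetical.

```python
# Reach and intermediate-connection limits for the two single-pair cabling
# categories described in the webinar.
SINGLE_PAIR_LIMITS = {
    "SP1-400":  {"max_length_m": 400,  "max_intermediate_connections": 5},
    "SP1-1000": {"max_length_m": 1000, "max_intermediate_connections": 10},
}

def link_fits(category: str, length_m: float, connections: int) -> bool:
    """Check a planned single-pair link against its category limits."""
    limits = SINGLE_PAIR_LIMITS[category]
    return (length_m <= limits["max_length_m"]
            and connections <= limits["max_intermediate_connections"])

if __name__ == "__main__":
    # A hypothetical sensor run: 650 m with 6 intermediate connections
    for category in SINGLE_PAIR_LIMITS:
        verdict = "fits" if link_fits(category, 650, 6) else "exceeds limits"
        print(f"{category}: {verdict}")
```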

942-C also recognizes broadband coaxial cable as a medium. 75-ohm cables, typically Series 6 and 11 coaxial cable, are most often terminated to F-type connectors. Jew observed, “I still work in data centers that use broadband coaxial cable. Some is used for antennas. We also use these cable types to support broadband video distribution.”

Another copper-cabling-related requirement in TIA-942-C is to use a minimum of 2 Category 6A cables, primarily to support latest-generation Wi-Fi technologies that transmit at tens of gigabits per second.

Direct-Attach Versus Structured Cabling

A question from an attendee sparked a discussion about the use of direct-attach cabling versus structured cabling in modern and next-generation networks, particularly including those supporting AI. The experts had plenty to say.

“TIA-942 says you should only use direct-attach cabling between equipment in the same cabinet or in adjacent cabinets,” Jew explained. “Some people use AOCs [active optical cables] and DACs [direct attach cables] within a row for AI connections within a pod [such as for a] GPU node and leaf switch. If the spine switches in the same row are close by, you probably are OK. But once you get past that row you really need to be using structured cabling because of the problem with tray capacity,” referring to the lack of capacity within trays for all the direct-attach cables that would be needed to serve these connections.

Kozischek added that in a relatively small data center, “direct connect is popular. But in a large data center, even going from one hall to another, you need structured cabling.”

Jew provided more insight: “In the designs we’re working on, we’re using direct connects between the GPU nodes and the leaf switches; then for all the other cabling—from leafs to spines and spines to superspines—we’re using structured cabling, partly because of the cable tray capacity and partly because we’re in this transition period between using 400G [8 fibers] and 800G or 1.6T, which use 2 MPOs. This allows us to use the same structured cabling to upgrade the speed rather than ripping out the DACs when I upgrade. If you’re looking long-term and you have any size in terms of your deployment, you’re going to need to use structured cabling.”
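
The rule of thumb in that exchange can be distilled into a simple decision helper. The Python sketch below is an interpretation of the guidance quoted above, not text from the standard; the category names are hypothetical labels.

```python
def cabling_recommendation(separation: str) -> str:
    """Rough guidance distilled from the webinar discussion.

    Same or adjacent cabinet: direct-attach (DAC/AOC), per TIA-942.
    Same row: DACs/AOCs are sometimes used for in-pod AI links, but it is
    a judgment call. Anything farther: structured cabling, largely because
    of cable tray capacity and easier speed upgrades.
    """
    if separation in ("same_cabinet", "adjacent_cabinet"):
        return "direct-attach (DAC/AOC)"
    if separation == "same_row":
        return "direct-attach may work; weigh cable tray capacity"
    return "structured cabling"

if __name__ == "__main__":
    for sep in ("same_cabinet", "same_row", "different_row", "different_hall"):
        print(f"{sep}: {cabling_recommendation(sep)}")
```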

Site Selection, Cooling, and Power

Since its initial publication, the TIA-942 standard series has ambitiously covered data center-related topics and considerations beyond the facilities’ telecommunications cabling systems—addressing the reality that data centers are interdependent ecosystems and one aspect or system affects the operations of others. In that vein, site selection, cooling, and power considerations are addressed in TIA-942-C.

About site selection, the TIA’s Fluet noted that risk analysis and mitigation are now necessary parts of the selection process. “The industry is finding out that looking at a 5-year flood zone is no longer valid,” he stated as one example. “A data center has to be able to be put anywhere—in an airport, near roads to control traffic.” These types of locations can no longer be avoided, as they likely were in the past. The practical reality of edge and micro-edge data centers being placed as physically close to users as possible has forced a rethinking of site selection, and TIA-942-C’s language reflects the new reality.

The new revision’s approach to cooling also differs from previous versions’, Fluet noted. “We previously referred to AC or HVAC. Now it’s heat-removal, including direct-to-chip cooling and immersion cooling. We made sure the standard’s requirements take into account various methods of cooling.

“We’re starting to see newer versions of GPUs coming by default with liquid cooling,” Fluet further explained.

References to standby power systems also have been revised in 942-C. “Diesel generators have been the tradition, but we’re seeing more gas generators now,” Fluet said. “Some other technologies are also in trial, including battery energy storage systems. Batteries are improving; so are fuel cells, including hydrogen fuel cells. We adapted the standard to make sure our requirements apply to whatever you are using for standby power.”

Furthermore, Fluet explained, the standard acknowledges and addresses the fact that some data center operators are generating their own power. “We’re used to seeing this in the Midwestern U.S. and in other parts of the world, where there are huge solar power plants.” Nuclear is another energy source to be recognized, he said. Some small and medium reactors can put out 20 to 50 megawatts. “We need to make sure we don’t prevent [these technologies from being implemented] as long as resiliency and redundancy are there,” he concluded.

Tables and Rating Levels

Jew pointed out the significance of updates that were made to several tables that appear at the end of the standard. Among those changes:

  • “Yes/No” language has been replaced with “Required/Not Required.”
  • Fire resistance is now the same for all walls, ceilings, and roofs.
  • Seismic requirements have been simplified to align with the International Building Code and telecommunications-rack requirements.
  • Security access control and monitoring has been clarified.
  • Lights-out data centers are now allowed for.
  • Several changes were made to UPS and battery system requirements, including clarification of monitoring requirements, room separation, and battery room safety.

“We spent a lot of time on the UPS and battery points,” he recalled. “There was a lot of debate. These tables are meant to make the ratings a lot more streamlined and usable. We are TIA [in which the “T” stands for “telecommunications”] but we have a lot of people with expertise in electrical and mechanical systems because they design data centers for a living.”

The rating system Jew referred to comprises four levels (also captured in the brief code sketch after the list):

  • Rated 1: Basic
  • Rated 2: Redundant components
  • Rated 3: Concurrently maintainable
  • Rated 4: Fault tolerant
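
As a quick reference, the four rated levels can be captured in a simple mapping; the descriptions are the ones listed above, and the structure itself is purely illustrative rather than part of the standard.

```python
# The four TIA-942 rated levels as listed above -- an illustrative mapping,
# not text from the standard itself.
RATED_LEVELS = {
    1: "Basic",
    2: "Redundant components",
    3: "Concurrently maintainable",
    4: "Fault tolerant",
}

def describe(level: int) -> str:
    """Return the short description for a rated level (1 through 4)."""
    return f"Rated {level}: {RATED_LEVELS[level]}"

if __name__ == "__main__":
    for level in sorted(RATED_LEVELS):
        print(describe(level))
```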

Data center facilities can earn a rating from TIA. Kozischek explained, “TIA’s rating system is important for users. We have many experts who certify data centers for a living. If you wanted to be a Rated 3 data center, you can go to this section [of the 942-C standard]. Instead of having a lot of Yes/No’s, it will guide you to help you pass the audit. This is a very good change in 942-C.”

Jew emphasized the value of these rating systems by pointing to financial institutions—explaining that the TIA rating system is one of just a few that financial institutions will recognize and accept for their facilities.

Always a Work in Progress

The development and revision of industry standards is a perpetual work in progress. While several years passed between the publication of the “B” version and the “C” version of TIA-942, efforts toward adding to or refining the document never ceased. That remains the case. Jew pointed out that representatives of TIA TR-42 “are working on a white paper with the idea that it may become an addendum,” while also noting, “ASHRAE is developing standards for immersion cooling.”

Fluet noted that the Open Compute Project (OCP) is also currently working to establish how to connect servers to liquid cooling, and how to design a server that will be connected to liquid cooling. “All of this is going to be important,” he said.

Jew added, “Our part will be: how will the structured cabling interface? Connectors that we deal with, the impact of immersion cooling liquids on cable jackets and cable performance … if you have an optical fiber connector with an air gap, liquid will get in there.”

Kozischek noted, “You will need sealed connectors, because it will be immersed in something. There are many unknowns.”

You can watch Fluet, Jew, and Kozischek’s complete presentation here.

We at Cabling Installation & Maintenance will continue to track standards-development activities and report them to you.

About the Author

Patrick McLaughlin | Chief Editor

Patrick McLaughlin, chief editor of Cabling Installation & Maintenance, has covered the cabling industry for more than 20 years. He has authored hundreds of articles on technical and business topics related to the specification, design, installation, and management of information communications technology systems. McLaughlin has presented at live in-person and online events, and he has spearheaded cablinginstall.com's webcast seminar programs for 15 years.
