Cooling Health-Care Technology

July 1, 2009
The cooling of IT spaces requires a substantial amount of preparation

The one constant about technology is that it is always changing. Two major categories of technology spaces in health-care facilities are information-technology (IT) spaces and procedure rooms. This article discusses the cooling of IT facilities and highlights some of the differences between IT facilities and procedure rooms.

IT spaces in health-care facilities can vary greatly, depending on the size of the facility and how progressively IT is used. Within the context of this article, IT includes computing, digital storage, digital backup, and Internet Protocol- (IP-) based networks, which can include systems such as telephone systems.

IT optimization is a function of the application(s). Processing, storage, retrieval, imaging, graphics, transfer rate, latency, response time, reliability, and other factors all enter into the equation regarding the optimum hardware and software configuration.

Because there are so many applications and permutations/combinations, generalizing requirements for data centers or other IT spaces is risky. This balancing or optimization can result in distributed or central computing. It also can result in off-site computing, either as a primary function or as a disaster-recovery backup solution.

While telecommunication facilities traditionally do not use raised floors, data centers do, although there certainly are plenty of exceptions. IT closets, such as intermediate-distribution and main-distribution frames, typically do not use a raised floor.

LIFE-CYCLE MISMATCH

One of the biggest pitfalls is the life-cycle mismatch between cooling equipment and IT equipment. According to the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), it is common for cooling equipment to have a projected life of 15 to 25 years. It is common for IT equipment to have a projected useful life of three to five years. Therefore, cooling equipment potentially can serve four or five generations of IT equipment; this represents an incredible challenge.

Often, data-center cooling is predominantly sensible cooling. As a result, cooling equipment that is particularly effective in high-sensible-heat-load conditions can be a good choice. Often, these units are referred to as CRAC (computer-room-air-conditioning) units.

CRAC units are best evaluated and tested using ANSI/ASHRAE Standard 127, Method of Testing for Rating Computer and Data Processing Room Unitary Air Conditioners. This standard includes important performance information, such as sensible coefficient of performance.
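
As a rough illustration of those two metrics, the sketch below computes a sensible heat ratio (sensible load divided by total load) and a sensible coefficient of performance (sensible cooling delivered per unit of input power) for a hypothetical CRAC unit. The numbers are placeholders, not Standard 127 rating points.

```python
# Illustrative arithmetic only; values are hypothetical, not Standard 127 rating points.

def sensible_heat_ratio(sensible_kw: float, latent_kw: float) -> float:
    """Fraction of the total cooling load that is sensible (SHR = sensible / total)."""
    return sensible_kw / (sensible_kw + latent_kw)

def sensible_cop(sensible_kw: float, input_power_kw: float) -> float:
    """Sensible coefficient of performance: sensible cooling delivered per unit of input power."""
    return sensible_kw / input_power_kw

# A hypothetical CRAC unit serving a nearly all-sensible IT load:
shr = sensible_heat_ratio(sensible_kw=95.0, latent_kw=5.0)   # ~0.95
scop = sensible_cop(sensible_kw=95.0, input_power_kw=30.0)   # ~3.2

print(f"Sensible heat ratio: {shr:.2f}")
print(f"Sensible COP:        {scop:.2f}")
```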

TYPICAL IT CLOSETS AND ROOMS

Even in new facilities, IT closets seem to be more of an afterthought than optimally located. They can be somewhat stranded with little room for cabling infrastructure. As a result, a cooling system may become more of a retrofit than a planned, integrated solution. The typical pitfalls associated with IT closets include:

  • No consideration for IT-closet upgrades or changes, such as in cabling or equipment.

  • No redundancy in power and cooling services.

  • No consideration for cooling services on emergency power.

  • Little provision for ease of maintenance or concurrent maintenance and operation.

IT closets typically are cooled with spot cooling or by extending the cooling system in the general area to the IT closet. Because of the often tight space constraints and overall small total tonnage, oversizing/undersizing is a significant risk, especially as upgrades are made in an IT rack without facilities engineers being adequately informed.
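
To make the oversizing/undersizing risk concrete, the hypothetical sketch below totals rack power (which is rejected almost entirely as sensible heat) and compares it with the capacity of a single closet cooling unit. The rack powers, the unit size, and the assumed 50-percent minimum turndown are placeholders, not design values.

```python
# Hypothetical closet check; rack powers and unit capacity are illustrative only.
KW_PER_TON = 3.517  # one ton of refrigeration (12,000 Btu/h) expressed in kW

def closet_heat_load_kw(rack_powers_kw):
    """Essentially all IT electrical power ends up as sensible heat in the closet."""
    return sum(rack_powers_kw)

def sizing_check(load_kw, unit_tons, min_turndown=0.5):
    """Flag gross under- or oversizing against a single cooling unit."""
    capacity_kw = unit_tons * KW_PER_TON
    if load_kw > capacity_kw:
        return "undersized: load exceeds unit capacity"
    if load_kw < min_turndown * capacity_kw:
        return "oversized: load below the unit's assumed minimum stable turndown"
    return "within range"

# As designed, then after an unannounced rack upgrade:
print(sizing_check(closet_heat_load_kw([2.0, 2.5]), unit_tons=2.0))       # within range
print(sizing_check(closet_heat_load_kw([2.0, 2.5, 4.0]), unit_tons=2.0))  # undersized
```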

Except in the case of large medical facilities, campuses, and multiple-location organizations, health-care computer rooms often are small compared with data centers for other applications. Typically, the smaller the data center, the more difficult it is to provide redundancy, the ability to continue full operation during equipment servicing, and the ability to handle varying loads and upgrades. The typical pitfalls associated with data centers include:

  • For raised-floor systems, a tendency for the raised-floor plenum to have inadequate air-handling capacity because of height, congestion, or leaks.

  • For overhead systems, a tendency to have inadequate flexibility to reconfigure or redistribute as IT upgrades occur.

  • Inadequate cooling-equipment redundancy, concurrent maintenance provisions, or provisions for load growth.

  • Cooling services derived from cooling systems in the area not having adequate availability 24/7 and/or being too dependent on conditions outside of the data center regarding capacity and operation. For example, a system may use primarily chilled water during cooling months and an air-side economizer at other times.

  • IT equipment not being adequately planned to establish a hot-aisle/cold-aisle configuration to avoid supply- and exit-air contamination at the IT-equipment level.

  • IT loads essentially being all sensible (different sensible-heat ratio than most other loads in a hospital). Equipment designed for high sensible loads is critical.


LOAD PROFILE

To properly size and select equipment and systems, it is important to determine the Day 1 (first day of operation) and final load. To establish the load profile, the following should be considered:

  • Schedule: hours of operation.

  • Load: What is the load? How much does the load vary? How rapidly does the load vary and for what duration?

  • Future load: In addition to operational variation in the load, will it change over time (e.g., technology upgrades or expansions)?

  • Life cycle: What is the life cycle of the equipment being cooled, and is the cooling system expected to provide service to some other equipment or application after the first life cycle is reached (e.g., a computer life cycle of three years and cooling-equipment life cycle of 15 years)?

Because of rapid advances in technology and short life cycles, it is common for the decision on equipment to be deferred as long as possible. As a result, the HVAC design, equipment procurement, and contract award easily could occur prior to the time the equipment to be cooled is determined. These types of situations tend to produce designs that are oversized.

For IT loads, ASHRAE datacom books (see sidebar) provide help both directly and indirectly. For example, “Thermal Guidelines for Data Processing Environments” created a thermal report for IT manufacturers to use. It establishes actual, rather than nameplate, load and recognizes that load varies depending on how the equipment is configured. “Datacom Equipment Power Trends and Cooling Applications” projects future loads for computer servers, storage servers, and communication equipment.

The combination of load variation and oversizing attributable to unknowns can produce unfavorable results (e.g., cooling equipment unable to operate stably at low load). A worthwhile goal is to identify:

  • Minimum and maximum load for Day 1 operation.

  • Minimum and maximum load for the end of the equipment (IT or procedure) useful life.

  • Minimum and maximum load for the next generation of equipment (IT or procedure).

With this information, equipment sizing and system selection can better match the load and achieve optimum performance.
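
One way to organize that information is sketched below: each phase's minimum and maximum loads are recorded and checked against an installed cooling capacity and a minimum stable turndown. All loads, the capacity, and the 40-percent turndown are assumptions for illustration only.

```python
# Hypothetical load-profile bookkeeping; all values (kW) are illustrative.
from dataclasses import dataclass

@dataclass
class LoadProfile:
    phase: str     # e.g., "Day 1", "end of IT life", "next generation"
    min_kw: float  # lowest expected operating load
    max_kw: float  # highest expected operating load

def check_phase(p, installed_kw, turndown=0.4):
    """Verify installed capacity covers the peak and can run stably at the minimum load."""
    ok_peak = p.max_kw <= installed_kw
    ok_low = p.min_kw >= turndown * installed_kw
    print(f"{p.phase}: peak covered = {ok_peak}, low-load stable = {ok_low}")

profiles = [
    LoadProfile("Day 1", min_kw=20, max_kw=60),
    LoadProfile("End of first IT life cycle", min_kw=35, max_kw=90),
    LoadProfile("Next IT generation", min_kw=50, max_kw=140),
]

for p in profiles:
    check_phase(p, installed_kw=100.0)
# Day 1 fails the low-load check (oversized for the initial load), and the next
# generation exceeds the installed capacity, so both ends of the profile matter.
```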

OPERATING CONDITIONS

Historically, data centers have been kept particularly cold (e.g., 68°F), a practice based largely on anecdote. “Thermal Guidelines for Data Processing Environments” identifies a wide range of acceptable temperatures and humidities (Table 1). These thermal guidelines are critical because they give designers far more freedom, greater opportunity to consider economizers, and less-stringent humidification requirements.
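
As a simplified illustration, the sketch below screens a measured inlet condition against an assumed dry-bulb and relative-humidity envelope. The limits shown are placeholders; the actual recommended and allowable ranges should be taken from Table 1 of the thermal guidelines, which also express the moisture limits differently (e.g., as dew points).

```python
# Simple envelope check; limits are placeholders, not a substitute for Table 1
# of the ASHRAE thermal guidelines.
ASSUMED_DB_F = (64.4, 80.6)   # assumed recommended dry-bulb range, deg F
ASSUMED_RH = (20.0, 60.0)     # assumed relative-humidity bounds, percent

def within_envelope(dry_bulb_f, rh_percent):
    """True if the supplied inlet condition falls inside the assumed envelope."""
    lo_t, hi_t = ASSUMED_DB_F
    lo_rh, hi_rh = ASSUMED_RH
    return lo_t <= dry_bulb_f <= hi_t and lo_rh <= rh_percent <= hi_rh

print(within_envelope(68.0, 45.0))  # the traditional "cold" room is well inside
print(within_envelope(78.0, 50.0))  # warmer setpoints can also qualify, easing economizer use
```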

PROVIDING FOR THE FUTURE

Challenges in providing for the future include:

  • Avoiding premature obsolescence of cooling equipment.

  • A sizing mismatch resulting in cooling equipment operating inefficiently or outside its performance capabilities.

  • Initial overspending by specifying and installing more equipment than was needed on Day 1.


As previously discussed, establishing the Day 1 load profile, as well as a sense of the loads of multiple generations of IT equipment, is critical. Using these profiles and projections, a phased approach can be implemented. Figure 1 provides an example of a Day 1 layout of a data center. Notice that space is allocated not only for future equipment racks, but also for future cooling (CRAC) units.

Note that there are two CRAC units to provide 100-percent redundancy and/or partial-load capacity. Further, note that the CRAC units discharge under the floor in the hot aisle to avoid the venturi effect of perforated tiles placed too near CRAC units. Figure 2 shows an example of the first two rows of racks being populated, while Figure 3 shows all rows of racks populated and two CRAC units added.
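
A rough sketch of that phasing arithmetic follows. It assumes a nominal per-unit sensible capacity and N+1 redundancy; the loads and unit size are hypothetical and are meant only to show why floor space for additional CRAC units is reserved on Day 1.

```python
import math

# Hypothetical phasing exercise; loads and unit capacity are illustrative.
UNIT_CAPACITY_KW = 70.0   # assumed nominal sensible capacity of one CRAC unit

def units_required(load_kw, redundancy=1):
    """Units needed to carry the load, plus spare units for N+redundancy."""
    return math.ceil(load_kw / UNIT_CAPACITY_KW) + redundancy

phases = {"Day 1 (two rows)": 60.0, "Build-out (all rows)": 190.0}
for phase, load in phases.items():
    print(f"{phase}: {units_required(load)} CRAC units (N+1)")
# Day 1 needs 2 units (one carrying the load, one redundant); the build-out
# needs 4, so floor space for 2 additional units is reserved from Day 1.
```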

There are various ways of providing for the future, including allocating space only; roughing in space and power; and roughing in space, power, and piping. The choice is based on economics and the level of disruption when upgrades or expansions occur.

PROCEDURE ROOMS

Unlike data-center equipment, the equipment used in procedure rooms, such as X-ray/imaging, magnetic-resonance-imaging, positron-emission-tomography, and computed-tomography equipment, is not standardized. It can vary significantly in shape, interface requirements, and configuration from manufacturer to manufacturer.

There can be a wide variety of challenges associated with this equipment and the rooms and suites that house it, such as air-change rate, velocity, particulates, electromagnetic interference, and noise. For redundancy, sometimes a good approach is to have one cooling source be a central system and the other a localized unit.

SUMMARY

Some key points to remember include:

  • Cooling equipment can see four to five generations of IT equipment. Therefore, allow for major changes in need and operation.

  • ASHRAE datacom books can be a source of information for technology cooling.

  • It is critical to identify the level of redundancy required.

  • It is critical to identify the full load profile (part load and full load), as well as current and future loads.

The president of DLB Associates Consulting Engineers and a member of HPAC Engineering's Editorial Advisory Board, Don Beaty, PE, FASHRAE, was chair of the American Society of Heating, Refrigerating and Air-Conditioning Engineers Technical Committee 9.9, Mission Critical Facilities, Technology Spaces and Electronic Equipment, from its inception through June 2006.

ASHRAE Datacom Books

American Society of Heating, Refrigerating and Air-Conditioning Engineers Technical Committee 9.9, Mission Critical Facilities, Technology Spaces and Electronic Equipment, has published a series of books on data-processing and communications (datacom) facilities:

  • “Thermal Guidelines for Data Processing Environments, 2nd Edition” (2009): Covers industry-endorsed environmental specifications, temperature- and humidity-measurement location, and actual, rather than nameplate, load.

  • “Datacom Equipment Power Trends and Cooling Applications” (2005): Provides trend curves for maximum load of network equipment, servers, etc.

  • “Design Considerations for Datacom Equipment Centers” (2005): Covers basics of data-center design, including HVAC, fire protection, and commissioning.

  • “Liquid Cooling Guidelines for Datacom Equipment Centers” (2006): Discusses liquid-cooling basics and performance considerations, provides vendor-neutral architectures, and compares the efficiencies of air and liquid cooling.

  • “Structural and Vibration Guidelines for Datacom Equipment Centers” (2008): A nontechnical look at serious structural and vibration challenges concerning today's data centers, it provides basic structural criteria for floor and raised-floor capacities.

  • “Best Practices for Datacom Facility Energy Efficiency” (2008): Discusses water- and air-side-economizer basics and guidelines.

  • “High Density Data Centers — Case Studies and Best Practices” (2008): Discusses 11 high-density installations, including overhead cooling, computer-room air conditioning, in-row cooling, and in-rack cooling, and establishes benchmarking and field data-collecting techniques.

To purchase these books, go to www.ashrae.org/bookstore.