In data-center thermal management, four trends are dominating discussions and driving decisions:
Cooling the edge.
Upgrading for capacity and efficiency.
Revolution in thermal controls.
System-performance accountability and certification.
Companies that address the challenges of these trends will achieve superior performance, a more productive and efficient environment, and happier customers.
Cooling the Edge
The growth of colocation and cloud computing has increased the importance of edge computing, as companies strive to provide high-bandwidth content, reduce latency, and enhance the mobile experience. Remote network closets and server rooms, once a secondary concern, are higher priorities, with companies seeking visibility into these spaces to ensure greater system availability.
Information-technology (IT) managers want to monitor environmental conditions within these spaces, view the status of equipment, and dispatch technicians to solve problems remotely. Recovery from unplanned outages must be quick and hassle-free, with the time spent by third-party service providers minimized.
Technology already exists for remotely monitoring temperature, humidity, and the operating conditions of cooling equipment in edge spaces. What is coming is access to that information—and the ability to manage and track troubleshooting assignments and workflows—on mobile devices. Mobile management of closet cooling systems will provide a higher level of protection and security while allowing the individuals responsible for the systems to resolve alarms quickly, speed maintenance, and free technicians to focus on other tasks.
Upgrading for Capacity and Efficiency
Emerson Network Power estimates that more than 80 percent of enterprise data centers have significant opportunities to reduce cooling energy costs, in many cases by 20 to 50 percent. Last year, we surveyed IT, facilities, and data-center managers in the United States and Canada and learned that half plan to upgrade their data-center cooling systems before the end of 2016.
The most common upgrade is the addition of variable-capacity components (fans and compressors) to adjust cooling capacity according to IT load. Because fan power varies with the cube of speed, the savings compound quickly: a 10-hp fan running at 100-percent speed, for example, draws 8.1 kW of electricity. With a reduction in speed to 90 percent, the fan draws only 5.9 kW, a 27-percent savings. At 70-percent speed, fan power drops to 2.8 kW, a 65-percent savings.
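The figures above follow from the fan affinity laws, under which power scales with the cube of speed. A minimal sketch, taking the article's 8.1-kW full-speed draw for a 10-hp fan as the baseline (the function name and structure are illustrative, not from the article):

```python
FULL_SPEED_KW = 8.1  # full-speed draw of the 10-hp fan cited in the article

def fan_power_kw(speed_fraction, full_power_kw=FULL_SPEED_KW):
    """Fan power at a fractional speed, per the affinity (cube) law."""
    return full_power_kw * speed_fraction ** 3

for frac in (1.0, 0.9, 0.7):
    power = fan_power_kw(frac)
    savings = 1 - power / FULL_SPEED_KW
    print(f"{frac:.0%} speed: {power:.1f} kW ({savings:.0%} savings)")
```

Running this reproduces the article's numbers to within rounding: roughly 5.9 kW at 90-percent speed and 2.8 kW at 70-percent speed.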
Energy rebates from utilities and local governments are available in every state and help to deliver faster returns on investment. Together, rebates and efficiency gains can provide payback within months of a thermal-system upgrade.
Minimizing the use of water for cooling in data centers meets not only economic and operational objectives, but sustainability ones.
We are having many new conversations with customers about saving water. A recent survey of engineers we conducted reveals more than half believe pumped-refrigerant economization will be the No. 1 technology replacing chilled-water systems over the next five years.
Large air-handling systems, such as indirect evaporative-cooling systems, are saving water. New epoxy-coated aluminum heat exchangers with relatively large surface areas allow for high levels of dry-effectiveness. This means a unit can achieve a desired supply-air temperature while remaining in dry operating mode for a relatively long time, minimizing or eliminating the need for mechanical or evaporative cooling.
Revolution in Thermal Controls
Today’s thermal controls are highly sophisticated and developed using human-centered design practices to ensure data is available when and where expected.
These new controls operate at both the individual-unit and system levels, using advanced machine-to-machine (M2M) communications, powerful analytics, and self-healing routines to ensure greater protection, efficiency, and insight into thermal conditions and operations.
By harmonizing cooling systems and avoiding conflicting operation, these controls can improve thermal-system energy efficiency by up to 50 percent compared with legacy technologies. For example, in an enterprise data center with 500 kW of IT load and energy costs of 10 cents per kilowatt-hour, average thermal-system power draw can be lowered from 380 kW to 184 kW, yielding roughly $171,700 in annual savings. That can lower mechanical power-usage effectiveness by more than 20 percent, from 1.76 to 1.37.
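The arithmetic behind that example can be checked directly. A minimal sketch, assuming year-round (8,760-hour) operation and the article's figures of 500 kW of IT load, a 380-to-184-kW reduction in thermal power, and $0.10/kWh (the function names are illustrative):

```python
IT_LOAD_KW = 500        # IT load from the article's example
RATE_PER_KWH = 0.10     # energy cost, $/kWh
HOURS_PER_YEAR = 8760   # assumes continuous, year-round operation

def annual_savings(before_kw, after_kw):
    """Annual dollar savings from a reduction in average power draw."""
    return (before_kw - after_kw) * HOURS_PER_YEAR * RATE_PER_KWH

def mechanical_pue(thermal_kw, it_kw=IT_LOAD_KW):
    """Mechanical PUE counting only IT plus thermal power, as in the example."""
    return (it_kw + thermal_kw) / it_kw

print(annual_savings(380, 184))  # ≈ $171,696 per year
print(mechanical_pue(380))       # 1.76 before the upgrade
print(mechanical_pue(184))       # ≈ 1.37 after
```

The computed mechanical PUE drops from 1.76 to about 1.37, a reduction of more than 20 percent, matching the article's claim.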
At the cooling-unit level, integrated controls provide a high degree of protection and optimal performance. They monitor hundreds of units and components; include automated routines, lead/lag, and cascading; and avoid unsafe operation through self-healing capabilities.
At the system level, new supervisory controls offer a way to view thermal operations across data centers and utilize multi-unit thermal-management routines to remove heat while achieving capital and operational savings. By harmonizing the operation of multiple units and providing quick access to actionable data, these controls can cut thermal-system energy usage in half and reduce deployment costs by 30 percent.
System-Performance Accountability and Certification
Insights gained from an individual-component or individual-unit approach to data-center thermal management can be misleading. A comparison of individual components may show a performance difference of 3 percent to 5 percent. A comparison of individual cooling units may show a performance difference of 5 percent to 7 percent. Depending on how well the units interact and work with each other through built-in M2M communication and advanced algorithms, however, the performance difference may be as much as 30 percent. Advanced tools that model performance and estimate costs enable this type of system-level analysis and comparison.
Another trend in system performance is testing standardization and certification. In the past, there was no certifying body or government organization bringing accountability concerning reliability and efficiency to the data-center-cooling market. Today, the Air-Conditioning, Heating, and Refrigeration Institute certifies the capacity and efficiency of data-center cooling equipment based on ASHRAE standards and U.S. Department of Energy regulations. This gives manufacturers consistent standards for ratings and helps to ensure customers get what they pay for. States increasingly are enforcing guidelines as well, as we see in Title 24 requirements of the California Energy Commission.
As with most technology evolution, data centers will incorporate more advanced technologies at a lower cost than was possible just a few years prior. The result will be superior functionality, more productive and efficient environments, and happier customers.
Did you find this article useful? Send comments and suggestions to Executive Editor Scott Arnold at [email protected].