Data Center Temperatures: A Hot Topic

One of the most common questions TechSite gets from our customers, and a much discussed topic in the data center industry, is “What is the ideal temperature for a data center?” Many clients assume there is one correct answer to this question, when in reality the answer is quite complex. Many factors need to be taken into consideration for each individual case, including, but not limited to, manufacturer warranties, the temperature control method (supply air, return air, server inlet), available redundancy, ASHRAE allowable vs. recommended ranges, and peace of mind for data center personnel. The goal of this article is to touch on some of these issues and dispel the misconception that cooler is always better.

Arguments for cooler temperatures:

The three common reasons customers give TechSite for keeping their data centers below 70°F (21°C) are:

  1. They’ve been doing it for years
  2. Their equipment requires these lower temperatures
  3. The thermal mass of the cooler space buys them extra response time in the event of a power failure. Often they can recall times when this has “saved the day” for them.

Although each of these reasons may have merit in some situations, the industry is shifting towards higher server inlet temperatures. Additionally, the ride-through time provided by a colder room rarely allows enough time to restore power or cooling during a prolonged outage.
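
A rough back-of-envelope calculation illustrates why the ride-through benefit is smaller than intuition suggests. The sketch below models a worst case in which the full IT load heats only the room air; the room volume, IT load and temperature limit are hypothetical assumptions, not values from any particular facility.

```python
# Back-of-envelope estimate of ride-through time after a total cooling failure.
# All values are illustrative assumptions, not measurements from any specific
# facility. Real rooms also exchange heat with walls, floors and equipment
# thermal mass, which this deliberately simple model ignores.

AIR_DENSITY = 1.2         # kg/m^3, approximate density of room-temperature air
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K), specific heat of air at constant pressure

def ride_through_minutes(room_volume_m3, it_load_kw, start_temp_c, limit_temp_c):
    """Minutes until the room air warms from start_temp_c to limit_temp_c,
    assuming the full IT load heats only the air (a worst-case simple model)."""
    air_mass_kg = AIR_DENSITY * room_volume_m3
    energy_to_limit_j = air_mass_kg * AIR_SPECIFIC_HEAT * (limit_temp_c - start_temp_c)
    return energy_to_limit_j / (it_load_kw * 1000.0) / 60.0

# Hypothetical 500 m^3 room with a 100 kW IT load and a 32 C shutdown limit:
for start_c in (18, 24):  # compare a "cold" room with a warmer set point
    minutes = ride_through_minutes(500, 100, start_c, 32)
    print(f"Start at {start_c} C -> {minutes:.1f} minutes to reach 32 C")
```

Under these assumptions the colder room buys well under a minute of additional ride-through, which is why backup power and cooling strategies matter far more than a low set point.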

Arguments for warmer temperatures:

  1. Many equipment manufacturers (e.g., Dell, Intel) are producing equipment compliant with the industry-accepted Thermal Guidelines for Data Processing Environments from the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE). These limits continue to rise, with the current recommended range of server inlet temperatures being 64.4°F to 80.6°F (18°C to 27°C).
  2. By increasing the acceptable temperature and humidity ranges in the data center, facilities are seeing lower energy costs due to reduced compressor workload and, depending on their locale, more hours of economization (pumped refrigerant, air-side and water-side). In addition to reduced energy costs for existing equipment with economizers, some facilities can consider more energy-efficient cooling technologies (evaporative cooling, 100% outside air, etc.) because of the expanded allowable temperature envelope. Cooling systems with PUEs of less than 1.1 are being installed in data centers that have adopted higher server inlet temperatures.
  3. Occasional excursions into the ASHRAE allowable range only slightly increase the average annual failure rate of equipment. Using the ASHRAE-published temperature ranges, local bin weather data and manufacturer-published failure rates at various server inlet temperatures, the average annual failure rate can increase by less than 2% even when the recommended temperatures are exceeded more than 15% of the time (a simple illustration of this weighted-average calculation appears after this list).
  4. As the return air temperature is increased, both the efficiency and the capacity of the cooling units increase. According to Liebert product literature, raising the return air temperature of a 30-ton air conditioner (Model DS105AU) from 70°F to 80°F yields an additional 16,100 BTU/hr (4.7 kW) of cooling. By raising the set point temperatures of units throughout a large data center, a significant amount of additional capacity can be gained, which may prevent or delay the capital expense of adding more air conditioners.
  5. With the development and implementation of newer control approaches, the actual server inlet temperature can be controlled more tightly than in the past. Until recently, monitoring and controlling air conditioner return air temperature was the primary method of data center temperature control. Variable capacity compressors, supply air control and rack-mounted temperature sensors now allow data center operators to control the temperature they actually care about: the rack inlet (a minimal control-loop sketch also appears after this list). If the server inlet temperatures are maintained at their desired set point, the temperature in the rest of the space is less of a concern.
  6. During a recent conversation in which I was explaining what TechSite does to a friend who works at a Fortune 500 company, he mentioned that he had once gone into one of his company’s data centers just to cool down because it was extremely warm outside. Although the risk may be low, keeping the data center much more ‘comfortable’ than the surrounding spaces may encourage unqualified people to enter the white space.
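
To make the weighted-average failure-rate argument in point 3 concrete, the sketch below weights a temperature-dependent relative failure rate by the hours spent in each inlet-temperature bin. All of the bin hours and failure-rate multipliers are hypothetical placeholders chosen to mirror the figures quoted above, not ASHRAE or manufacturer data.

```python
# Illustrative weighted-average failure-rate calculation. The bin hours and
# failure-rate multipliers below are hypothetical placeholders, not ASHRAE or
# manufacturer data; the method of weighting a temperature-dependent failure
# rate by the hours spent in each inlet-temperature bin is the point.

# (inlet temperature bin in C, hours per year in that bin, relative failure rate)
bins = [
    (20, 5000, 1.00),  # within the recommended range: baseline failure rate
    (25, 2400, 1.00),  # still within the recommended range
    (30, 1000, 1.05),  # allowable but above recommended
    (35,  360, 1.15),  # upper end of the allowable range
]

total_hours = sum(hours for _, hours, _ in bins)  # 8,760 hours in a year
weighted_rate = sum(hours * rate for _, hours, rate in bins) / total_hours

hours_above_recommended = sum(hours for temp_c, hours, _ in bins if temp_c > 27)
print(f"Hours above the recommended range: "
      f"{hours_above_recommended / total_hours:.0%} of the year")
print(f"Weighted annual failure rate vs. baseline: {weighted_rate - 1:.1%} increase")
```

Under these placeholder numbers, spending roughly 16% of the year above the recommended range raises the weighted annual failure rate by a little over 1%, consistent with the under-2% figure cited above.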

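The rack-inlet control approach in point 5 can be illustrated with a minimal sketch. The target, deadband and step size below are hypothetical tuning values, and real CRAC/CRAH controllers use far more sophisticated logic (PID loops, staging, vendor-specific protocols); this only shows the core idea of steering the supply set point from the hottest measured rack inlet.

```python
# Minimal sketch of supply-air control driven by rack inlet sensors. The
# target, deadband and step size are hypothetical tuning values; real
# CRAC/CRAH controllers use more sophisticated logic (PID loops, staging,
# vendor-specific protocols).

TARGET_INLET_C = 25.0  # desired worst-case rack inlet temperature (assumed)
DEADBAND_C = 0.5       # ignore small deviations to avoid hunting
STEP_C = 0.25          # set point adjustment per control interval

def next_supply_setpoint(current_setpoint_c, rack_inlet_temps_c):
    """Nudge the supply-air set point so the hottest rack inlet tracks the target."""
    error = max(rack_inlet_temps_c) - TARGET_INLET_C
    if error > DEADBAND_C:    # hottest inlet too warm: supply cooler air
        return current_setpoint_c - STEP_C
    if error < -DEADBAND_C:   # comfortably cool: raise the set point, save energy
        return current_setpoint_c + STEP_C
    return current_setpoint_c  # within the deadband: hold

# Example: sensors report inlets between 23.1 and 24.2 C, so the controller
# raises the supply set point slightly rather than overcooling the room.
print(next_supply_setpoint(18.0, [23.1, 23.8, 24.2]))  # -> 18.25
```
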
These six examples are just a few of the many arguments for increasing the temperature in the data center. As with any change, consultation with an engineering professional is recommended before raising temperatures. The existing infrastructure, redundancy and distribution of cooling in the data center should be thoroughly investigated and verified first. Only then should temperatures be raised, and the increase should be performed as an iterative process that may include temperature recording and detailed monitoring to ensure all IT equipment remains properly cooled. TechSite has a team of engineers with experience in performing feasibility studies such as these and developing implementation plans for raising data center temperatures. Please contact Jim Schrader or Jackie Kershaw for additional information regarding TechSite engineering services.
