A Look into Data Center Cooling Technology and Methods

With technology woven into almost everything in today's world, there is hardly an area of daily life it does not influence, and it is fair to say that many of us spend more than half of our day online. The internet is now an everyday necessity, from the workplace to personal use, and with a smartphone in nearly every pocket, most of our waking moments are spent connected. All of this is possible because of data centers.

Naturally, the demand for real-time data transmission is at an all-time high. The need for computers and networking equipment to handle these requests is what gave rise to data centers: facilities where computing and networking equipment are centralized to collect, store, process, and distribute large amounts of data. Data centers have been around since the dawn of the modern computing age. An average data center demands a huge amount of electricity, which is hardly surprising given the computing power packed onto a single data floor, not to mention the cooling infrastructure required to maintain the ideal operating environment for all that equipment. Taken together, data centers consume about three percent of the world's electricity.

The sole purpose of data center cooling technology is to maintain environmental conditions suitable for information technology equipment (ITE) operation. Achieving this goal requires removing the heat produced by the ITE and transferring it to some heat sink. In most data centers, operators expect the cooling system to run continuously and reliably. These factors have led the data center cooling industry to surge and become a prominent segment of the IT space. The global data center cooling market was valued at $8.6 billion in 2018 and is expected to grow at a compound annual growth rate (CAGR) of 13.5 percent from 2019 to 2025.

The increasing need for energy-efficient data center facilities, growing investment by managed service and colocation providers, and the rising construction of hyperscale data centers are among the factors expected to drive the market in the coming years. Proper temperature management in a data center is vital to keeping equipment functional: excess warm air and humidity create a financial burden that a business can avoid.
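
One common way to quantify that energy efficiency is power usage effectiveness (PUE): the ratio of total facility power to the power actually delivered to the IT equipment, with the gap above 1.0 largely being cooling overhead. A minimal sketch with hypothetical load figures:

```python
def power_usage_effectiveness(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE = total facility power / IT equipment power.
    A PUE of 1.0 would mean every watt reaches the IT equipment;
    anything above that is cooling, power distribution, and lighting."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# Hypothetical facility: 1,000 kW of IT load plus 400 kW of cooling
# and other overhead.
print(f"PUE: {power_usage_effectiveness(1400.0, 1000.0):.2f}")  # PUE: 1.40
```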

Process of Cooling Data Centers
Traditional data center cooling techniques used a combination of raised floors and computer room air conditioner (CRAC) or computer room air handler (CRAH) infrastructure. The CRAC/CRAH units would pressurize the space below the raised floor and push cold air through the perforated tiles and into the server intakes. Once the cold air passed over the server’s components and vented out as hot exhaust, that air would be returned to the CRAC/CRAH for cooling. Most data centers would set the CRAC/CRAH unit’s return temperature as the main control point for the entire data floor environment.
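
A minimal sketch of what that return-temperature control point might look like as simple proportional control logic; the setpoint and gain values here are hypothetical, not taken from any particular CRAC/CRAH product:

```python
def crac_cooling_output(return_air_temp_c: float,
                        setpoint_c: float = 24.0,
                        gain_pct_per_deg: float = 20.0) -> float:
    """The warmer the return air runs above the setpoint, the harder
    the unit cools; output is clamped to a 0-100% duty range."""
    error_deg = return_air_temp_c - setpoint_c
    return max(0.0, min(100.0, error_deg * gain_pct_per_deg))

for temp_c in (23.5, 24.5, 26.0, 29.0):
    print(f"return air {temp_c:.1f} C -> cooling output {crac_cooling_output(temp_c):.0f}%")
```
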
Some other techniques used for data center cooling are as follows:

Free Cooling - Cooling can account for more than half of a data center's total annualized operating cost as energy prices and IT power consumption continue to rise. Free cooling is the most cost-effective approach: it keeps the data center's temperature and airflow in check while using minimal mechanical cooling, reducing overall cooling expenditure. The method comes in two forms, air-side economization and water-side economization. Air-side economization uses outdoor air to cool the equipment, as the sketch below illustrates. Its main flaw is that it can allow pollutants and moisture from outside to enter the data center.
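
A minimal sketch of the air-side economizer decision, under hypothetical thresholds; the humidity guard reflects the moisture caveat just mentioned:

```python
def use_outdoor_air(outdoor_temp_c: float,
                    outdoor_rh_pct: float,
                    supply_setpoint_c: float = 20.0,
                    max_rh_pct: float = 80.0) -> bool:
    """Bring in outdoor air only when it is cool enough to do useful
    work and dry enough not to push the data hall out of its humidity
    envelope; otherwise fall back to mechanical cooling."""
    return outdoor_temp_c <= supply_setpoint_c and outdoor_rh_pct <= max_rh_pct

print(use_outdoor_air(12.0, 55.0))  # True: cool, dry day -> free cooling
print(use_outdoor_air(12.0, 95.0))  # False: too humid, despite the low temperature
```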

Liquid Cooling - Liquid cooling reduces the energy consumption of data center cooling systems because chilled water can be targeted directly at the equipment that needs it, rather than cool air being supplied to every area of the facility. With the chilled-water technique, the CRAH is connected to a chiller. As the chilled water travels through coils, it absorbs heat and carries it back to the chiller, where that heat is rejected to condenser water flowing through a cooling tower. The worked example below shows how much heat such a loop can remove.
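
The heat a chilled-water loop absorbs follows directly from the flow rate and the temperature rise across the coils (Q = flow x specific heat x delta-T). A small worked example with hypothetical loop figures:

```python
WATER_CP_KJ_PER_KG_K = 4.186  # specific heat of water

def chilled_water_heat_removal_kw(flow_kg_per_s: float,
                                  supply_temp_c: float,
                                  return_temp_c: float) -> float:
    """Q = m_dot * c_p * delta_T; kJ/s is the same unit as kW."""
    return flow_kg_per_s * WATER_CP_KJ_PER_KG_K * (return_temp_c - supply_temp_c)

# Hypothetical loop: 10 kg/s of water supplied at 7 C, returning at 14 C.
print(f"{chilled_water_heat_removal_kw(10.0, 7.0, 14.0):.0f} kW of heat removed")  # ~293 kW
```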

Calibrated Vectored Cooling (CVC) - CVC is a form of data center cooling technology designed specifically for high-density servers. It optimizes the airflow path through the equipment so the cooling system can handle heat more effectively, making it possible to increase the number of circuit boards per server chassis while using fewer fans.

Pumped Refrigerant - This method pumps chilled water through a heat exchanger and uses a cold pumped refrigerant to draw out the heat. The technique yields savings because it moves heat away from the servers efficiently and allows humidification requirements to be greatly reduced.

Indirect Air Evaporative System - This technique uses an air duct connected to an indirect evaporative cooler. The method is energy efficient: when the outdoor air is cooler than the air inside, it is used to cool the airflow within the data center without the two air streams mixing.

Ideal Temperature for a Data Center
According to the American Society of Heating, Refrigerating, and Air Conditioning Engineers (ASHRAE), the recommended temperature for server inlets (that is, the air drawn into the server to cool its internal components) is between 18 and 27 degrees Celsius (64.4 to 80.6 degrees Fahrenheit), with relative humidity between 20 and 80 percent.
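
Those ranges translate directly into a simple health check that monitoring software might apply to each inlet sensor reading; a minimal sketch:

```python
def within_ashrae_envelope(inlet_temp_c: float, relative_humidity_pct: float) -> bool:
    """Check a server inlet reading against the ASHRAE ranges cited
    above: 18-27 degrees C and 20-80% relative humidity."""
    return 18.0 <= inlet_temp_c <= 27.0 and 20.0 <= relative_humidity_pct <= 80.0

print(within_ashrae_envelope(22.0, 45.0))  # True: comfortably inside the envelope
print(within_ashrae_envelope(29.5, 45.0))  # False: inlet running hot
```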

Experiment Conducted by Microsoft
Back in 2018, Microsoft sank an entire data center to the bottom of the Scottish sea, plunging 864 servers and 27.6 petabytes of storage 117 feet below the surface. The project was named Project Natick. In September 2020, the company retrieved the data center and reported that the experiment was a success, with findings suggesting that the idea of an underwater data center is actually a good one. On the surface, sinking a data center to the ocean floor may seem strange, but Microsoft's Project Natick team hypothesized that placing data centers underwater would make them more reliable and energy efficient.

Conclusion
Regardless of which data center cooling system a facility chooses, maintenance and monitoring are a common part of the equation. Server room monitoring and maintenance help the IT team determine whether the current cooling system is working, and they provide peace of mind to customers concerned about the climate readings on their equipment. Power and cooling efficiency will remain a top concern for data centers. New generations of processors for machine learning, artificial intelligence, and analytics workloads will place massive demands on power and generate substantial amounts of heat. Understanding your data center cooling requirements and implementing the most suitable technique can help your organization flourish and excel in an ever more competitive business world.