Data center operators are being encouraged to organize their facilities so that they can run at higher temperatures. The theory is that higher operating temperatures improve overall data center efficiency by reducing the cooling effort needed, leading to reduced loads on fans and chillers and permitting more free cooling hours.
However, there are many factors to consider before you go ahead and raise cooling inlet temperatures. I recently caught up with Kevin Brown, Chief Technology Officer and Senior Vice President, Innovation, IT Division at Schneider Electric, who knows more about this subject than most, and asked him what factors operators should be taking into account. You can see the interview here.
“It’s a very interesting topic for me,” he said. “In general, I believe we’re reaching the point of diminishing returns on some of these temperature-control strategies because a data center is a very complex system. When you start raising temperatures, some things actually consume more energy. You might save some energy by not running the compressor, for example, but then the IT server fans might ramp up and they will consume more energy. You really need to model that out and see what the trade-offs are.”
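The trade-off Brown describes can be sketched numerically. The toy model below is purely illustrative, not Schneider Electric's methodology: it assumes server fans hold speed up to a threshold and then ramp linearly (with power following the cube of fan speed, per the fan affinity laws), while compressor work falls a fixed percentage per degree of setpoint increase. All the constants are made up for the sketch.

```python
def fan_power_kw(inlet_temp_c, base_kw=10.0):
    # Hypothetical fan model: constant speed up to 25 °C, then a linear
    # ramp to 1.5x speed at 35 °C. Power scales with speed cubed
    # (fan affinity laws); the figures are invented for illustration.
    ramp = max(0.0, (inlet_temp_c - 25.0) / 10.0)
    speed = 1.0 + 0.5 * ramp
    return base_kw * speed ** 3

def chiller_power_kw(inlet_temp_c, base_kw=60.0):
    # Hypothetical chiller model: each degree of setpoint increase
    # above 20 °C trims compressor work by ~3% (assumed figure).
    return base_kw * (1.0 - 0.03 * (inlet_temp_c - 20.0))

def total_cooling_related_kw(t):
    # Total energy affected by the setpoint: fans plus compressor.
    return fan_power_kw(t) + chiller_power_kw(t)

# Sweep setpoints to find where raising temperature stops paying off.
best = min(range(20, 36), key=total_cooling_related_kw)
for t in (20, 25, 30, 35):
    print(f"{t} °C -> {total_cooling_related_kw(t):.1f} kW")
print(f"lowest total at {best} °C")
```

Under these assumed curves the total falls as the setpoint rises from 20 °C, bottoms out in the high twenties, and then climbs again as cubic fan power overtakes the compressor savings, which is exactly the diminishing-returns behavior Brown warns about.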
“Your strategy will also have to take into account where you are in the world because climate is an important factor. So when you go through the math, as we did in several real-world examples that we examined in depth, you find that in some cases you can get improvement, but in others you might actually increase your energy consumption in the data center.”
You can see these examples by downloading Schneider Electric’s white paper #221: “The Unexpected Impact of Raising Data Center Temperatures.”
“The bigger context at the end of the day is how much of an improvement are we really making, especially with regard to what we need to be doing in the future. Challenges are going to increase because there are going to be more and more data centers as society needs more compute power. That growth in demand is not going to stop. I think governments will become concerned about how effectively the industry is using its energy, and things like raising temperatures are not going to get us to where we will need to be.”
“We need to think about the overall challenge, and I like the term ‘energy effective’ because now I think we’re moving into an area where we will be taking care not to oversize the amount of IT needed to run a particular application. Hopefully, that will help with energy efficiency and at the same time as we do that we can start designing the physical infrastructure to meet those requirements as well. So all of that should lead us to being much more effective in our use of energy, and I think that’s really the next step function that we need to be focusing on.”
PUE (Power Usage Effectiveness), the ratio of a data center’s total facility energy to the energy consumed by its IT equipment, is a popular metric for helping operators gauge how efficiently they are using the entire energy budget of a facility, including the cooling, power distribution, and backup infrastructure that supports the IT load. Is it a sufficient metric to ensure that the data centers of the future will be “energy effective?”
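As a quick illustration of the metric, here is a minimal sketch of the PUE calculation with hypothetical monthly figures (the breakdown between cooling and distribution losses is invented for the example):

```python
def pue(total_facility_kwh, it_equipment_kwh):
    # PUE = total facility energy / IT equipment energy.
    # 1.0 is the theoretical ideal, where every kWh reaches the IT load.
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly energy figures for a small facility.
it_kwh = 700_000
cooling_kwh = 210_000
losses_kwh = 70_000  # UPS, power distribution, lighting, etc.

print(pue(it_kwh + cooling_kwh + losses_kwh, it_kwh))  # 1.4
```

Note that the ratio says nothing about whether the IT load in the denominator is itself right-sized for the applications it serves, which is the limitation the interview turns to next.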
“It’s not only about the PUE,” said Kevin. “It’s going to be about PUE and the entire system starting with the applications and ensuring that we’re sizing the IT and other physical infrastructure properly. That is something that we have to do as an industry.”
Kevin concluded: “So although raising the temperatures might save you some energy and may be a good idea in certain scenarios, we think you should, first of all, model it out to see what the impact is going to be. But secondly, you have to realize that it’s not the ‘cure-all’ that we’re going to need as we continue to face the energy challenges we have before us.”
The post Energy Effectiveness Creates a New Focus for Data Centers appeared first on Schneider Electric Blog.