New Data Center Prediction and Simulation Tools Cut Costs and Boost Uptime

January 23, 2017, by tnielsen

When it comes to managing data center operations, system administrators often prioritize uptime. Business line executives, on the other hand, accept uptime as a given and often focus on operational cost. One class of tool that addresses both priorities is data center infrastructure management (DCIM) software, which has evolved in recent years into a critical component of both uptime and cost control.

According to the Uptime Institute (a division of the 451 Group), the market for data center infrastructure management systems will grow to $7.5 billion by 2020. Why such growth? Newer management tools are designed to identify and resolve issues with a minimum of human intervention. By correlating power, cooling, and space resources to individual servers, today's DCIM tools can, through simulation and prediction, proactively inform IT management systems of potential physical infrastructure problems and how they might impact specific IT loads. In virtualized and dynamic cloud environments, this real-time awareness of constantly changing power and cooling capacities is important for safe server placement.

Modern planning tools can predict the impact of a new physical server on power and cooling distribution. Planning software tools also calculate the impact of moves and changes on data center space, and on power and cooling capacities.
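As a minimal sketch of the kind of placement check such planning tools perform, the function below tests whether a rack has the headroom to absorb a new server's power, heat, and space needs. All field names and figures are illustrative assumptions, not a real DCIM API.

```python
# Illustrative sketch (not a real DCIM API): checking whether a rack
# has the headroom to accept a new server before it is placed.

def can_place(server, rack):
    """True if the rack can absorb the server's power, cooling, and space needs."""
    return (rack["power_headroom_w"] >= server["power_w"]
            and rack["cooling_headroom_w"] >= server["heat_w"]
            and rack["free_u"] >= server["size_u"])

# Assumed example values: a rack with 1200 W of spare power capacity,
# 1500 W of spare cooling capacity, and 4 rack units free.
rack = {"power_headroom_w": 1200, "cooling_headroom_w": 1500, "free_u": 4}
server = {"power_w": 750, "heat_w": 750, "size_u": 2}
print(can_place(server, rack))   # True
```

A production DCIM tool would, of course, also weigh branch-circuit loading, airflow, and network port availability; this sketch only shows the basic headroom comparison.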

These more intelligent tools also enable IT to inform the lines of business of the consequences of their actions before server provisioning decisions are made. Business decisions that result in higher energy consumption in the data center, for example, will impact carbon footprint and carbon tax. Chargebacks for energy consumption are also possible with these new tools and can alter the way decisions are made by aligning energy usage to business outcomes.
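The chargeback idea can be sketched in a few lines. Everything here is a hypothetical illustration: the rate, the carbon factor, and the per-unit metering are assumptions, not values from any particular DCIM product.

```python
# Hypothetical sketch of energy chargeback: allocating metered energy
# use back to business units as cost and carbon footprint.
# The rate and carbon factor below are illustrative assumptions.

ENERGY_RATE = 0.12      # assumed cost per kWh, USD
CARBON_FACTOR = 0.4     # assumed kg of CO2 emitted per kWh

def charge_back(usage_kwh_by_unit):
    """Return per-business-unit cost and carbon footprint."""
    report = {}
    for unit, kwh in usage_kwh_by_unit.items():
        report[unit] = {
            "cost_usd": round(kwh * ENERGY_RATE, 2),
            "carbon_kg": round(kwh * CARBON_FACTOR, 1),
        }
    return report

# Example: monthly kWh metered for each business unit's servers
usage = {"finance": 12000, "ecommerce": 30000}
print(charge_back(usage))
```

Surfacing numbers like these to the lines of business is what lets energy usage be weighed against business outcomes before provisioning decisions are made.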

Below are some examples of the practical uptime-enhancing and cost-saving advantages of DCIM systems:

  • They provide an up-front assessment of risk based on calculation-driven simulation, rather than making decisions based only on “gut feel”. By simulating the consequences of power and cooling device failure on IT equipment, they help to identify critical business application impacts.
  • They help to avoid potential downtime resulting from overloaded branch circuits or hot spots. This is accomplished by generating recommended installation locations for rack-mount IT equipment. The location selection is based on available power, cooling, space capacity, and network ports.
  • They help operators to immediately identify which servers will be affected if a particular rack or UPS fails, so discovery through trial and error is avoided. They illustrate the power path (from UPS to rack to individual devices within the rack) and measure both load and rack capacity.
  • They help provide factual evidence, rather than conjecture, when an operator needs to determine which equipment was moved and when. They achieve this by creating an audit trail for all changes to assets and work orders for a specified range of time, including a record of alarms raised and alarms removed.
  • They can help save energy costs by indicating which IT and/or cooling assets are being underutilized in the data center or server room. This is accomplished through the identification of excess capacity (either IT or cooling) so that operators can determine which particular assets can either be decommissioned or used elsewhere.
  • They help the operator to analyze whether management’s cost-cutting, energy-saving strategies are actually working. This can be achieved because they provide a Power Usage Effectiveness (PUE) value on a daily basis and track historical PUE.
  • They help operators to make informed decisions on which power and cooling sub-systems within the data center to optimize. Besides generating an overall PUE number, they provide a breakdown of how much energy each of the particular sub-systems is consuming.
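The PUE bookkeeping described in the last two points is simple arithmetic: PUE is total facility energy divided by IT equipment energy, and the sub-system breakdown is each sub-system's share of the total. The sketch below shows this calculation; the sub-system names and readings are illustrative assumptions.

```python
# Minimal sketch of the PUE calculation and sub-system breakdown
# described above. Sub-system names and kWh readings are assumed
# example values, not output from any particular DCIM product.

def pue(it_kwh, subsystem_kwh):
    """PUE = total facility energy / IT equipment energy."""
    total = it_kwh + sum(subsystem_kwh.values())
    return total / it_kwh

def breakdown(it_kwh, subsystem_kwh):
    """Share of total facility energy consumed by IT and each sub-system."""
    total = it_kwh + sum(subsystem_kwh.values())
    shares = {"it": it_kwh / total}
    for name, kwh in subsystem_kwh.items():
        shares[name] = kwh / total
    return shares

# One day's assumed readings: 1000 kWh of IT load plus overhead
daily = {"cooling": 400.0, "ups_losses": 80.0, "lighting": 20.0}
print(pue(1000.0, daily))        # 1.5
```

Tracking this value daily and comparing it against the historical trend is what turns an energy-saving strategy from conjecture into something measurable.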

Legacy reporting systems, designed to support traditional data centers, are no longer adequate for new “agile” data centers that need to manage constant capacity changes and dynamic loads. New DCIM tools improve IT room allocation of power and cooling (planning), provide rapid impact analysis when a portion of the IT room fails (operations), and leverage historical data to improve future IT room performance (analysis). For more information, download Schneider Electric White Paper 107 “How Data Center Infrastructure Management Software Improves Planning and Cuts Operational Costs”.

