How to reduce risk in data center design and maintenance
- By Adam Ledwell
- Sep 17, 2015
Humans are generally risk-averse animals. We live our daily lives in constant ‘risk assessment mode’ – from deciding whether to pet a neighbor’s dog to evaluating that ghost chili curry for dinner. But while avoiding risk – and its outcome – is an easy decision for us to make on a personal level, doing the same within certain professional settings can be difficult.
Being unable to appropriately assess and manage risk can be hugely detrimental to continuity and productivity, and it can create numerous adverse effects, especially in the federal space – where public funds and global-scale continuity needs are on the line.
For the data center industry specifically, traditional resiliency and redundancy standards, such as those developed by the Telecommunications Industry Association, have given facility builders and operators a foundational process for evaluating risk. But these are still far too broad to provide an accurate measure. Government data center operators must take into consideration the facility’s design and maintenance tradeoffs when evaluating investment and risk.
Assessing risk: designing the data center
The data center planning phase is the recommended point of entry for risk assessment. This includes selecting an optimal site location, identifying IT needs, evaluating which risks should be mitigated, eliminated or accepted, and designing the facility infrastructure around these factors. Once the facility is built, ongoing maintenance and disaster recovery plans should be implemented.
Site selection. When selecting a new site or evaluating an existing data center location, understanding and mitigating the geographic, regional, local, site-related and building risks will lessen the effects of downtime. Climate, electricity rates, incentives and regulations should all be considered as well.
IT needs. To identify their IT needs, data center operators must first clarify the function of the data center. Is it directly responsible for supporting critical transactions or housing important data, or is it simply a failover facility? These categorizations will help operators determine the necessary level of IT infrastructure and understand what installed IT equipment requirements must be met to maintain uptime, including power and energy reliability.
Risk testing. By running operational failure tests, data center operators will be able to define what impact each scenario has on power and cooling to IT equipment. From here, they can make educated choices about which risks to accept or eliminate.
Design. Once the IT needs are evaluated and risk appetite identified, data center builders can determine whether a traditional or prefabricated build will best suit their needs.
Avoiding risk: Maintaining the data center
At this point, federal data center managers must also look to balance IT requirements and risks with efficiency. Unlike commercial data centers, federal data center facilities must adhere to very specific regulations. Two such federal directives are the Energy Independence and Security Act of 2007 and the more recent Executive Order 13514 that details sustainability goals.
For federal IT departments seeking ways to centralize and optimize their existing technology to fit into new budget requirements and meet power reduction goals, solutions like uninterruptible power supplies (UPSs) are a relatively easy place to start reducing costs while increasing efficiency.
Traditional UPSs may take up little floor space, but they can eat up a significant amount of power and generate a large amount of heat. By upgrading to modern systems, which can be upwards of 96 percent efficient, data center operators will not only decrease direct power consumption costs, but drastically reduce cooling needs.
A legacy 500kVA UPS running at 88 percent efficiency, for example, could generate $42,000 in electricity costs and $16,800 in cooling costs annually. Compounded over 10 years, this UPS would cost $588,000. Meanwhile, a 500kVA UPS running at 96 percent efficiency costs just $19,600 a year in direct and cooling power. Compounded over 10 years, this system would amount to a $196,000 spend – a savings of $392,000 compared to the less-efficient unit.
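The comparison above can be sketched as a simple calculation. The electricity rate, cooling overhead factor and full-load operating assumption below are illustrative placeholders, not figures from the article, so the outputs will approximate rather than exactly reproduce the dollar amounts quoted:

```python
# Rough sketch of the legacy-vs-modern UPS cost comparison.
# Assumptions (not from the article): 500 kW continuous load,
# $0.07/kWh electricity, cooling cost at 40% of the cost of the
# waste heat being removed.

HOURS_PER_YEAR = 8760

def annual_ups_cost(load_kw, efficiency, rate_per_kwh=0.07, cooling_factor=0.4):
    """Annual cost of UPS losses: electricity wasted as heat, plus the
    cooling energy needed to remove that heat from the facility."""
    loss_kw = load_kw / efficiency - load_kw          # power dissipated as heat
    electricity = loss_kw * HOURS_PER_YEAR * rate_per_kwh
    cooling = electricity * cooling_factor            # cooling tracks heat output
    return electricity + cooling

legacy = annual_ups_cost(500, 0.88)   # legacy UPS, 88% efficient
modern = annual_ups_cost(500, 0.96)   # modern UPS, 96% efficient

print(f"Legacy (88%):  ${legacy:,.0f}/yr")
print(f"Modern (96%):  ${modern:,.0f}/yr")
print(f"10-year savings: ${(legacy - modern) * 10:,.0f}")
```

Under these assumptions the legacy unit costs roughly $58,500 a year and the modern unit roughly $17,900 – in the same neighborhood as the article’s figures, with the gap attributable to the assumed rate and load profile.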
While saving money is important, downtime can be utterly unacceptable for many agencies. It is often significantly less expensive to commit upfront costs to mitigating the risk of downtime, than it is to spend resources recovering from an event.
Disaster preparedness plans are crucial to ensuring optimal facility performance and avoiding the costly results of downtime – loss of productivity, financial strain, customer/user backlash, reputational damage, etc. Comprehensive disaster preparedness plans should consist of preparation and prevention, detection and incident classification, response, mitigation and recovery. These plans should be written and regularly updated as appropriate.
In our digitally driven world, where data is big and the need for it bigger, instances of data center downtime can create lasting effects. In the federal space, this rings especially true. Government agencies are not only subject to strict budgets but are under incredible pressure to increase efficiency while also ensuring security. By evaluating risk, designing for optimal functionality and maintaining for continuity, federal data centers can ensure the continued success of their facilities.
View the original article on GCN.com
Adam Ledwell is manager – ITB Federal Government Systems Engineers, Schneider Electric.