Schneider Electric in the News, weeks of 9/14/2015 and 9/28/2015

October 8, 2015

Schneider Electric News
Energy Manager Today: Saving Power in Data Centers 
By Carl Weinschenk, September 18, 2015 

A new white paper from Schneider Electric explores the best ways of cooling a data center. One of the methods – use of cool air from the outdoor environment – can cut energy costs significantly.

Chillers and compressors are relied upon to keep IT equipment in data centers cool. The use of outdoor air, however, may reduce or even eliminate this equipment, with obvious upside energy ramifications.

Whether to use what the white paper terms direct or indirect air depends on conditions specific to each data center. The most obvious is the climate in which the data center is located. There are other issues, however. The white paper describes the approaches and the pros and cons of each.

Saving money by making data center cooling more efficient is a hot topic. Yesterday, Energy Manager Today described a story on calculating power usage effectiveness (PUE) that was posted at Data Center Knowledge. Also this week, Data Center Dynamics posted a detailed article on ways of assessing power in data centers and the impact of virtualization and other emerging approaches.
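PUE itself is a simple ratio – total facility energy divided by the energy delivered to IT equipment – so the calculation those articles discuss can be sketched in a few lines. A minimal illustration in Python, using made-up numbers rather than figures from any of the cited pieces:

```python
# Power Usage Effectiveness (PUE): total facility energy divided by the
# energy consumed by IT equipment. Values closer to 1.0 mean less overhead
# (cooling, power conversion, lighting) per unit of useful IT load.
# The numbers below are illustrative only.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Return the PUE ratio for a given measurement period."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Example: a facility drawing 1,800 kWh while its IT load consumes 1,200 kWh
print(f"PUE: {pue(1800, 1200):.2f}")  # -> PUE: 1.50
```

Economizer ("free cooling") approaches like those in the white paper lower the numerator by reducing chiller and compressor energy, which is what pushes PUE toward 1.0.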

Check out the full article:
http://www.energymanagertoday.com/saving-power-in-data-centers-0115873/


GCN: How to reduce risk in data center design
By Adam Ledwell, September 17, 2015 

Humans are generally risk-averse animals. We live our daily lives in constant ‘risk assessment mode’ – from deciding whether to pet a neighbor’s dog to evaluating that ghost chili curry for dinner. But while avoiding risk – and its outcome – is an easy decision for us to make on a personal level, doing the same within certain professional settings can be difficult.

Being unable to appropriately assess and manage risk can be hugely detrimental to continuity and productivity, and it can create numerous adverse effects, especially in the federal space – where public funds and global-scale continuity needs are on the line.

For the data center industry specifically, traditional resiliency and redundancy standards, such as those developed by the Telecommunications Industry Association, have given facility builders and operators a foundational process for evaluating risk. But these are still far too broad to provide an accurate measure. Government data center operators must take the facility’s design and maintenance tradeoffs into consideration when evaluating investment and risk.

Assessing risk: Designing the data center

The data center planning phase is the recommended point of entry for risk assessment. This includes selecting an optimal site location, identifying IT needs, evaluating which risks should be mitigated, eliminated or accepted, and designing the facility infrastructure around these factors. Once the facility is built, ongoing maintenance and disaster recovery plans should be implemented.

Site selection. When selecting a new site or evaluating an existing data center location, understanding and mitigating the geographic, regional, local, site-related and building risks will lessen the effects of downtime. Climate, electricity rates, incentives and regulations should all be considered as well.

IT needs. To identify their IT needs, data center operators must first clarify the function of the data center. Is it directly responsible for supporting critical transactions or housing important data, or is it simply a failover facility? These categorizations will help operators determine the necessary level of IT infrastructure and understand what installed IT equipment requirements must be met to maintain uptime, including power and energy reliability.

Risk testing. By running operational failure tests, data center operators will be able to define what impact each scenario has on power and cooling to IT equipment.  From here, they can make educated choices about which risks to accept or eliminate.

Design. Once the IT needs are evaluated and risk appetite identified, data center builders can determine whether a traditional or prefabricated build will best suit their needs.

Avoiding risk: Maintaining the data center

At this point, federal data center managers must also look to balance IT requirements and risks with efficiency. Unlike commercial data centers, federal data center facilities must adhere to very specific regulations. Two such federal directives are the Energy Independence and Security Act of 2007 and the more recent Executive Order 13514, which details sustainability goals.

For federal IT departments seeking ways to centralize and optimize their existing technology to fit into new budget requirements and meet power reduction goals, solutions like uninterruptible power supplies (UPSs) are a relatively easy place to start reducing costs while increasing efficiency.

Traditional UPSs may take up little floor space, but they can eat up a significant amount of power and generate a large amount of heat. By upgrading to modern systems, which can be upwards of 96 percent efficient, data center operators will not only decrease direct power consumption costs, but drastically reduce cooling needs.

A legacy 500kVA UPS running at 88 percent efficiency, for example, could generate $42,000 in electricity costs and $16,800 in cooling costs annually. Over 10 years, this UPS would cost $588,000. Meanwhile, a 500kVA UPS running at 96 percent efficiency costs just $19,600 a year in direct and cooling power. Over 10 years, this system would amount to a $196,000 spend – a savings of over $300,000 compared to the less-efficient unit.
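The arithmetic behind that comparison is straightforward to check. A minimal sketch that reproduces the figures above – the annual costs come from the example itself, and the loss-scaling check at the end assumes annual cost is roughly proportional to UPS losses (1 minus efficiency):

```python
# Reproduces the 10-year UPS cost comparison above.
# Annual figures come from the example; everything else is derived.

LEGACY_ANNUAL = 42_000 + 16_800   # electricity + cooling, 88% efficient UPS
MODERN_ANNUAL = 19_600            # direct and cooling power, 96% efficient UPS
YEARS = 10

legacy_total = LEGACY_ANNUAL * YEARS
modern_total = MODERN_ANNUAL * YEARS
print(f"Legacy 10-year cost: ${legacy_total:,}")                 # $588,000
print(f"Modern 10-year cost: ${modern_total:,}")                 # $196,000
print(f"Savings:             ${legacy_total - modern_total:,}")  # $392,000

# Sanity check (assumption): losses fall from 12% to 4% of throughput,
# i.e. to one third, which matches the ratio of the annual figures.
print(round(LEGACY_ANNUAL * (1 - 0.96) / (1 - 0.88)))  # -> 19600
```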

While saving money is important, downtime can be utterly unacceptable for many agencies. It is often significantly less expensive to commit upfront costs to mitigating the risk of downtime than it is to spend resources recovering from an event.

Disaster preparedness plans are crucial to ensuring optimal facility performance and avoiding the costly results of downtime – loss of productivity, financial strain, customer/user backlash, reputational damage, etc. Comprehensive disaster preparedness plans should consist of preparation and prevention, detection and incident classification, response, mitigation and recovery. These plans should be written and regularly updated as appropriate.

In our digitally driven world, where data is big and the need for it bigger, instances of data center downtime can create lasting effects. In the federal space, this rings especially true. Government agencies are not only subject to strict budgets but are under incredible pressure to increase efficiency while also ensuring security. By evaluating risk, designing for optimal functionality and maintaining for continuity, federal data centers can ensure the continued success of their facilities.

About the Author
Adam Ledwell is manager – ITB Federal Government Systems Engineers, Schneider Electric. 

Check out the full article:
http://gcn.com/articles/2015/09/17/evaluating-data-center-risks.aspx


Datacenter Dynamics: When DCIM met ITSM
By , September 16, 2015 
An excerpt

DCIM delivers results, but ITSM integration makes it even more valuable

There is no question that data center infrastructure management has been recognized as a valuable piece of the data center puzzle. However, the fact that DCIM encompasses everything from point solutions to full-blown management consoles means that there are many different ways to approach DCIM, as well as methodologies to integrate it into larger hardware/software management and ITSM solutions.

Information is key to successful IT management and that is exactly what DCIM solutions deliver. With current generation technologies there is a significant amount of instrumentation available that can be utilized to provide detailed information about your data center infrastructure. But it is easy to get lost in the minutiae; making use of all the available data is a noble goal, but realistically, determining which sources deliver information that can be acted upon in a practical fashion can be difficult. Making that information usable and relating it to other data derived from your IT infrastructure is where the real value lies.


Tools of the trade

One of the biggest problems when dealing with the wealth of DCIM solutions is trying to figure out how to integrate them with your existing IT management tools, especially since DCIM tools often cross the divide between traditional IT and facilities management.

DCIM vendors have started to address this issue by providing entire suites of tools and services, while some are offering direct connections to major IT management platforms from vendors such as HP and BMC. The tools don’t address organizational issues related to the division of IT and facilities responsibilities, but they do allow IT to get a real-world view of critical infrastructure issues that can potentially impact data center performance.

With the historical separation between IT and facilities, IT departments have often been completely insulated from facilities issues, regardless of their impact on IT. With growing operational costs, this has become an impractical way to run a data center, and tighter integration between facilities and IT means increased value in DCIM, ITSM and traditional IT management tools.

Smaller DCIM vendors are disadvantaged by the lack of integration with larger management systems and often seem blinded by the abilities of their own point solutions. When asked how they plan to integrate with other tools or large-scale management systems, the responses most often look to place the responsibility on the customer. Offering just SNMP information doesn’t cut it anymore, and the very common response “we have an open API” is equivalent to saying “you need to write code to integrate our product with what you use, or pay us to do that for you.” Neither answer builds confidence in the potential user.

To a large extent, this is why the technologies of DCIM and IT service management are converging. Once users got their hands on DCIM tools they realized that the data they delivered would be invaluable in better refining their ITSM processes and procedures. Ad hoc and makeshift integration between the two technologies demonstrated the value of this approach. And with their customers demanding a tighter integration between tools, vendors are responding all the way up the process chain.

Hardware vendors are actively working with DCIM vendors to provide the hooks necessary to deliver information about their products. DCIM vendors are taking that data, organizing it, and feeding it to integrated ITSM solutions. The ITSM solutions can then be used to better analyze service delivery and determine the most effective ways to deploy, configure, and manage the IT load within the data center.
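To make the hand-off concrete, here is a purely illustrative sketch of the kind of mapping described above – a facilities-side DCIM reading reshaped into an incident record an ITSM tool could consume. All field names and thresholds are hypothetical and not drawn from any particular DCIM or ITSM product:

```python
# Hypothetical DCIM-to-ITSM hand-off: turn a rack sensor reading into an
# ITSM-style incident record. Field names and thresholds are invented for
# illustration; real integrations use vendor connectors, not hand-rolled glue.

from datetime import datetime, timezone

def dcim_reading_to_incident(reading: dict, temp_limit_c: float = 32.0):
    """Map a rack inlet-temperature reading to an incident dict, or None if healthy."""
    if reading["inlet_temp_c"] <= temp_limit_c:
        return None
    return {
        "summary": f"Rack {reading['rack_id']} inlet temperature high",
        "severity": "major" if reading["inlet_temp_c"] > temp_limit_c + 5 else "minor",
        "source": "dcim",
        "detected_at": datetime.now(timezone.utc).isoformat(),
        "details": reading,
    }

reading = {"rack_id": "A-12", "inlet_temp_c": 35.4, "power_kw": 7.8}
print(dcim_reading_to_incident(reading))
```

The article’s point, of course, is that customers should not have to write this glue themselves; integrated DCIM/ITSM suites do the organizing and feeding automatically.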

Schneider Electric, which produces a suite of data center management tools under the name StruxureWare for Data Centers, took the view that it needed to provide its tools with a better look further up the stack, into the VMs and applications running on the hardware that its DCIM products could already see. To this end, it worked with HP’s Composable Infrastructure Partners Program.

Schneider’s integration with HP OneView Advanced means that the automatic asset identification and analysis data that OneView acquires when a new device is added to the network is passed down to the StruxureWare software. This means customers can see information derived from both products – the operational status of the server hardware in situ with related data center information – giving them a more detailed look into, and control of, their data center environment.

Regardless of the approach taken, it is clear that DCIM and ITSM are on convergent paths. Eventually customers will be able to deploy end-to-end solutions that combine the features of both technical approaches. It is most likely that future products will be integrated modular tools rather than large, monolithic applications, but the end result will be a combined delivery of information from both facilities and IT points of view.

View the full article:
http://www.datacenterdynamics.com/it-networks/when-dcim-met-itsm/94801.article


BizTech: Growth of Digital Traffic Fuels the Rise of Micro Data Centers [#Infographic]
By Ricky Ribeiro, September 30, 2015 

With the number of devices and the data coming from those devices increasing every year, IT needs to support more, smaller data centers in order to scale.

They say size doesn’t matter, but when it comes to IT, that’s not necessarily true.

Some of the big Internet companies, like Facebook and Google, are going hog wild and building massive data centers to support their users' bottomless data appetites.

And there’s good reason to increase our data centers’ capacities: Recent estimates say that annual global data center IP traffic will reach 8.6 zettabytes by the end of 2018.

You might think that the only answer to this explosion of digital traffic is to build large data centers, but companies actually need to go smaller with data center technology too. Micro data centers, as they’re being called, allow enterprise IT to get closer to the devices and, in theory, reduce latency and produce a smaller energy footprint.

What’s the definition of “micro” in the term micro data center? Schneider Electric defines a micro data center as “a self-contained, secure computing environment that includes all the storage, processing and networking required to run the customer’s applications.”

To further illustrate their value, Schneider Electric has compiled stats and facts about micro data centers. Check out the infographic in the full article below.

View the full article:
http://www.biztechmagazine.com/article/2015/09/growth-digital-traffic-fuels-rise-micro-data-centers-infographic


MSPmentor: Are Your Cloud Data Centers Solar Proof?
By Michael Brown, September 29, 2015

As far as disaster management goes, earthquakes, floods, political instability, hurricanes and tornadoes are the usual suspects that need to be prepared for. No doubt you have taken steps to ensure that your cloud-based file sharing services are never threatened by them. But are there other, more exotic threats that are just as deadly, or even more so, and which you might have failed to take into account?

In this article, we are going to talk about the dark horse of disasters – solar flares.

What is a solar flare?

Solar flares occur when the sun ejects a cloud of plasma and electromagnetic radiation due to a sudden release of built-up magnetic energy within it. Also known as coronal mass ejections (CMEs), they occur all the time, and often the sun manages to shoot one straight at us. Thankfully, the Earth’s magnetosphere forms a protective bubble all around the planet, deflecting them away much like a force field in Star Trek.

However, CMEs can mess with our satellites, navigation and communication equipment due to the extremely powerful electromagnetic pulse (EMP) they carry, which can induce currents in conducting material such as wires and electronic circuits.

The true power of solar flares

While smaller solar flares are a concern, a powerful solar flare can do much worse. The biggest solar eruption in recorded history occurred back in 1859. Called the Carrington Event after its observer, Richard C. Carrington, the flare led to aurora borealis being observed as far away as Hawaii and Australia, with displays said to be so bright that people could read by them at night.

The EMP from the flare was equally impressive, setting many telegraph lines and even telegraph offices ablaze. As electrical infrastructure was still in its infancy, the real power of the flare was never known.

An EMP from a CME of that magnitude can destroy power transformers, cables and telephone lines. Research conducted by Lloyd’s of London and Atmospheric and Environmental Research (AER) found that a similar event today could cost the US around $0.6 trillion to $2.6 trillion. In fact, a flare of similar intensity almost hit the Earth in 2012.

How can solar flares affect data centers?

Fortunately, solar flares usually do not project an EMP that can instantly fry electronics, although this is open to debate. An EMP from a sudden event like a nuclear blast is different from that of a solar flare, which is a long-duration, low-intensity event. Solar flares mainly affect large-scale electrical and communication infrastructure. Therefore, not only can you expect to be without power for weeks or months if such an event occurs, but equipment left exposed to the grid can be destroyed by severe voltage fluctuations.

Eric Gallant from Schneider Electric suggests that Transient Voltage Surge Suppression (TVSS) implemented at multiple levels can protect sensitive electronic equipment against power surges.

You should also consider investing in a proper backup power supply. Uninterruptible power supplies (UPSs) and backup power generators can be used to ensure that your equipment does not shut down suddenly and can be turned off properly in the event of a prolonged outage.

Beyond these, basic disaster preparedness strategies which you have formulated for other natural occurrences should suffice.

Although massive solar flares hitting the Earth are rare events, the fact that they do occur and have caused damage before is grounds enough to take them seriously. There is some concern in scientific circles because the sun is going through a phase of increased activity that is expected to continue. It is therefore in your best interest to take the dangers posed by solar flares into account in your disaster strategy.

View the original article: 
http://mspmentor.net/infocenter-cloud-based-file-sharing/092915/are-your-cloud-data-centers-solar-proof


Research and Markets: US Market for Green Data Centers to Grow 26% Through 2019
Posted By: Jane Edwards

A new Research and Markets report predicts the U.S. market for green data centers will grow at a compound annual growth rate of 26.35 percent through 2019.
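As a sense check of what a 26.35 percent CAGR implies, here is a minimal sketch; the starting market size is a hypothetical placeholder, not a figure from the report:

```python
# Compound annual growth at 26.35%. The $10B baseline is hypothetical;
# the report excerpt above does not state a starting market size.

CAGR = 0.2635
start_billions = 10.0  # hypothetical baseline

for year in range(1, 6):
    projected = start_billions * (1 + CAGR) ** year
    print(f"Year {year}: ${projected:.1f}B")

# After five years of 26.35% growth, the market is roughly 3.2x its starting size.
```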

Research and Markets said Thursday the market will be driven by the demand among data center operators for computer systems facilities that are designed to reduce energy costs and environmental impact.

Companies such as Microsoft, Apple, Intel, Google and Facebook have begun to set up data centers that run on renewable energy sources such as solar, wind, micro-hydro, geothermal and biogas fuel cells, according to the report.

In addition to the use of renewable sources, recycling of waste and free cooling are some of the concepts involved in green data facilities.

The report cited Cisco Systems, Dell, Hewlett-Packard, Emerson Network Power, IBM, Schneider Electric and Rittal as key market players.

View the full article:
http://blog.executivebiz.com/2015/09/research-and-markets-us-market-for-green-data-centers-to-grow-26-through-2019/
