Schneider Electric in the News, week of 5/25/2015

June 22, 2015 Schneider Electric

Schneider Electric News
 
Mission Critical: Hurricane Electric Extends Global Network With Second Point Of Presence In Spain
May 28, 2015 

Hurricane Electric has announced that it has added a new Point of Presence (PoP) at Telvent Carrierhouse, located at Acer 30-32, in Barcelona. This is Hurricane Electric’s second PoP in Spain.

Operated by Schneider Electric, the Telvent Carrierhouse Barcelona is a carrier-neutral facility, providing data center and high-speed connectivity solutions to customers throughout Barcelona and the surrounding area. Hurricane Electric’s new PoP in this location will furnish customers with a variety of options through 100GE (100 Gigabit Ethernet), 10GE (10 Gigabit Ethernet), GigE (1 Gigabit Ethernet) and 100BaseT network connections, as well as lower latency and fewer router hops.

Because Hurricane Electric supports both IPv6 and IPv4 over the same connection at no additional charge, businesses utilizing IP Transit from the company will also have the opportunity to offer their customers seamless access to IPv6 connectivity if they desire. All core nodes at this PoP employ 100 Gbps or multiple 10 Gbps (OC-192 / STM-64) connections, and connectivity at the core routers comprises a combination of backbone circuits, exchange ports, and private peering connections.
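
To make the dual-stack point concrete, here is a minimal Python sketch (my own illustration, not anything published by Hurricane Electric) of how an application prefers IPv6 and falls back to IPv4 when a hostname publishes both AAAA and A records over the same connection:

```python
import socket

def connect_dual_stack(host, port):
    """Prefer IPv6, fall back to IPv4, for a host with both AAAA and A records."""
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    # Sort so AF_INET6 (IPv6) results are tried before AF_INET (IPv4).
    infos.sort(key=lambda info: info[0] != socket.AF_INET6)
    last_err = OSError(f"no usable addresses for {host}")
    for family, socktype, proto, _canonname, addr in infos:
        try:
            sock = socket.socket(family, socktype, proto)
            sock.connect(addr)
            return sock          # connected over whichever family worked first
        except OSError as err:
            last_err = err
    raise last_err

# Example (he.net publishes both record types): sock = connect_dual_stack("he.net", 80)
```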

“Even after opening the Madrid PoP in 2013, Hurricane Electric has seen a surging demand for quality Internet connectivity in the Spanish market, leading us to establish a new PoP in Barcelona,” said Mike Leber, president of Hurricane Electric. “I am delighted that businesses in Barcelona will now gain access to Hurricane Electric’s rich global network and have the opportunity to purchase high-speed transit at a reasonable price.”

Check out the full article: 
http://www.missioncriticalmagazine.com/articles/87447-hurricane-electric-extends-global-network-with-second-point-of-presence-in-spain 


TMCnet: Schneider Electric: More Software and the IoT to Reshape Data Center Design 
By Rich Tehrani, May 27, 2015

There is a macro trend of moving proprietary hardware systems to open systems and moving hardware functionality to software – at this point, this shouldn’t be a surprise to anyone who has read my GENBAND Perspectives 15 blog post, last week’s HP NFV story, or even the writing I’ve done on Imagine Communications disrupting the video distribution space with SDN and virtualized solutions.

To learn more, I sat down with Schneider Electric’s Srdan Mutabdzija, Global Solution Offer Manager, and Jason Covitz, Director of Strategy for IT Business, where we talked about IT infrastructure and how it will look in the future.

Jason emphasized the move to commoditized and standardized infrastructure – not just in the spaces mentioned above but also in oil and gas, mining and manufacturing. Srdan explained that their products, and the market in general, are evolving toward prefabricated, plug-and-play, efficient cooling solutions that lower time to market, reduce risk and allow a more seamless move to public/private clouds.

[Image: FlexPod Express (Schneider Electric)]

An example of their product offering is the FlexPod Express for the office environment – allowing the company’s data center product to be deployed in an SMB or branch office. The solution supports products from Microsoft, NetApp and Cisco among others.
 

I also had a chance to read a white paper presenting a TCO analysis of a traditional data center versus a scalable, prefabricated one. The premise is that standardized, scalable, pre-assembled, and integrated data center facility power and cooling modules provide a TCO savings of 30% compared to traditional, built-out data center power and cooling infrastructure. Avoiding overbuilt capacity and scaling the design over time contributes a significant percentage of the overall savings. The results are based on a hypothetical data center in St. Louis, MO, USA, with a density of 7 kW/rack.

As the charts below show, OPEX and CAPEX decrease considerably because the data center can be built out over time. Significant CAPEX and OPEX savings accrue when the data center is built out to 4 MW instead of 5 MW. In addition, running the system at a higher percentage load each year yields energy savings, and a capital cost savings of approximately 2% accrues due to the cost of capital. One assumes this becomes a greater savings in a higher interest rate environment.
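
To see how deferring capital spending produces a savings via the cost of capital (the mechanism behind the roughly 2% figure above), here is a small, purely hypothetical Python calculation; the module cost, discount rate, and build schedule are invented for illustration and are not taken from the white paper:

```python
# All figures here are invented for illustration; they are not from the
# Schneider Electric white paper, and the result will not match its 2% figure.

COST_PER_MW = 5_000_000   # assumed infrastructure cost per MW of capacity
DISCOUNT_RATE = 0.08      # assumed cost of capital

def present_value(cash_flows, rate):
    """Discount (year, amount) cash flows back to year 0."""
    return sum(amount / (1 + rate) ** year for year, amount in cash_flows)

# Build 4 MW all at once, versus one 1 MW module every two years.
upfront = present_value([(0, 4 * COST_PER_MW)], DISCOUNT_RATE)
phased = present_value([(year, COST_PER_MW) for year in (0, 2, 4, 6)], DISCOUNT_RATE)

print(f"Up-front build PV: ${upfront:,.0f}")
print(f"Phased build PV:   ${phased:,.0f}")
print(f"PV savings:        {1 - phased / upfront:.1%}")  # deferral alone saves money
```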

[Figure: Schneider Electric 10-year capital cost breakout]
[Figure: Schneider Electric 10-year operating cost breakout]

The chart below shows the aggregate savings over time.

[Figure: Schneider Electric 10-year cost comparison]

Another important takeaway from my conversation is that the edge isn’t going away – especially in light of the fact that IoT devices will continue to generate a tremendous amount of data. Srdan said, “We are moving to a hybrid cloud with emphasis on a big data center in the middle and smaller ones on the sides.”

One final comment concerned enterprises not wanting to handcuff themselves to the public cloud. There are in fact many reasons to keep data local: security, privacy, and the ability to know whether government data requests have been made.

In short, Schneider Electric sees the data center of the future morphing into a more prepackaged offering, with modules being added as needed. Moreover, they see the need to ensure enough compute power is near the devices that will generate the bulk of the data. In other words, the IoT will be responsible for a rethink of how corporations design and build out their macro and micro data centers.

Read the full article:
http://blog.tmcnet.com/blog/rich-tehrani/cloud-computing/schneider-electric-more-software-and.html 


Data Center Journal: Software-Defined Networking Shakes Up Converged IT 
By Himanshu Patel, May 26, 2015
The rise of technology trends such as big data and BYOD (bring your own device) is causing a seismic shift in the way data centers are managed today, owing to the massive amounts of data these trends drive. To handle this increasing amount of data, IT managers are turning to software-defined networking (SDN), a new approach to designing, building and managing virtualized networks that separates the network’s control from the physical hardware. SDN is viewed as a manageable, cost-effective and adaptable architecture that can handle the high-bandwidth, dynamic nature of today’s applications.
 
At the most basic level, SDN is a networking architecture that serves as a software-based “traffic cop” for all the information that comes through a data center. The software, also known as the SDN controller, provides a central view of the overall network and enables IT managers to control and configure routers and switches without having to manually configure each physical appliance. With a central view, IT managers have a clear picture of their entire network, which in turn makes management easier and provides more control over network traffic.
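
For readers who want to see the “traffic cop” idea in miniature, here is a toy Python sketch; the Switch and Controller classes and their methods are entirely hypothetical stand-ins for the much richer APIs of real SDN controllers:

```python
from dataclasses import dataclass, field

@dataclass
class Switch:
    """Hypothetical switch: in SDN it simply executes the rules it is given."""
    name: str
    flow_table: list = field(default_factory=list)

    def install_rule(self, rule: dict) -> None:
        self.flow_table.append(rule)

class Controller:
    """Central 'traffic cop': one view of the network, one place for policy."""

    def __init__(self, switches):
        self.switches = switches

    def push_policy(self, rule: dict) -> None:
        # One call fans the rule out to every device; nobody logs into
        # individual boxes to configure them by hand.
        for sw in self.switches:
            sw.install_rule(rule)

fabric = Controller([Switch("leaf-1"), Switch("leaf-2"), Switch("spine-1")])
fabric.push_policy({"match": {"dst_port": 443}, "action": "send_to_priority_queue"})
print(fabric.switches[0].flow_table)
```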
 
But although SDN delivers the benefit of an agile, manageable network by optimizing performance, many organizations may not realize the significant impact that it can have on the hardware in a data center. The more businesses virtualize their applications, networks and infrastructure—depending more on remote software and less on hardware—the more critical data center uptime and availability become.
 
The Impact of Software and Virtualization on the Physical Data Infrastructure
In a software-defined system, where network, storage and servers are virtualized, the defining points of the environment are no longer the physical devices. By shedding physical constraints, these traditional IT assets become more dynamic and flexible—but also more critical. As a result, IT personnel have begun to shift their focus to managing the increasingly important IT assets: applications and software. Because of this trend, the data center manager’s focus has also shifted, taking into account what the IT infrastructure needs (rack space and availability, power and cooling) to ensure it continues to run smoothly.
 
Virtualization, cloud computing and software-defined networking can have a major impact on the reliability of the physical data center infrastructure. In a software-defined data center environment, traditional systems are seldom agile enough to power and cool elastic virtual switches, for the following reasons:

  • High densities and hot spots can arise: Virtualization often leads to higher power densities. Even though virtualization may help reduce overall power consumption, virtualized equipment is installed and grouped in ways that can create local high-density areas, which can then lead to hot spots. If not addressed, these challenges can threaten the reliability and availability of the data center.
  • Rack-level power and cooling must be considered: Virtualized IT loads—particularly in highly virtualized cloud data centers—can vary in both time and location. To ensure availability in these systems, rack-level power and cooling health must be considered before changes are made.
  • Virtualization can negatively affect PUE: Virtualization reduces IT loads, meaning the data center’s power usage effectiveness (PUE) is likely to worsen. Right-sizing both the power and cooling infrastructure to the reduced load can help improve PUE; a worked example follows this list.
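
Here is the worked PUE example promised above; the kilowatt figures are illustrative only:

```python
def pue(it_kw, overhead_kw):
    """PUE = total facility power / IT power."""
    return (it_kw + overhead_kw) / it_kw

before = pue(it_kw=1000, overhead_kw=600)     # 1.60 before virtualization
after = pue(it_kw=600, overhead_kw=600)       # 2.00: less IT load, same overhead
rightsized = pue(it_kw=600, overhead_kw=360)  # 1.60 again once overhead is right-sized

print(before, after, rightsized)              # 1.6 2.0 1.6
```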

Elastic, versatile equipment is needed to efficiently cool and power the flexible, dynamic nature of software-defined networking. By using modular, high-performance power and cooling systems, data center managers can help ensure optimal uptime and availability while avoiding unnecessary downtime.
 
Converged Infrastructure: A New Approach to IT Infrastructure for SDN
Cooling and power systems are not the only pieces of physical infrastructure that can use an upgrade when it comes to managing software-defined networking. In fact, in a post-virtualization software-defined data center, old legacy IT stacks aren’t cut out to support the needs of virtual workloads. Since siloed physical storage and network assets lack the optimization to support virtual servers, resource overprovisioning may result. Throwing more hardware at the problem only adds more complexity and cost, and it doesn’t fix the issue at hand. Therefore, a new approach to IT infrastructure is needed.
 
Consolidating IT infrastructure components into a single optimized platform with central management—a system commonly referred to as “converged infrastructure”—can enable increased utilization and lower costs. In this way, benefits that come from software-defined networking can be realized.
 
Where legacy IT infrastructure often falls short, converged infrastructure allows you to design, build and maintain segments of the virtualization stack while supporting growth.
 
The Importance of Data Center Infrastructure Management in the Software-Defined Era
In addition to implementing a converged-infrastructure system, data center managers should consider integrating data center infrastructure management (DCIM) software. DCIM provides single-pane-of-glass visibility across the entire data center and is therefore able to inform the SDN software where to send packets for optimal efficiency and availability. Without DCIM, SDN creates a more complex and critical environment that can be difficult to manage properly, putting data centers at risk for downtime and stranded capacity.
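
As a rough illustration of that DCIM-to-SDN handoff, the sketch below (with hypothetical rack names and figures) shows how per-rack power and temperature data of the kind DCIM collects could steer new load to a rack that can actually support it:

```python
# Hypothetical DCIM inventory: per-rack power headroom and inlet temperature.
racks = {
    "A1": {"power_headroom_kw": 1.5, "inlet_temp_c": 27.0},
    "B3": {"power_headroom_kw": 4.0, "inlet_temp_c": 22.5},
    "C2": {"power_headroom_kw": 2.0, "inlet_temp_c": 24.0},
}

def best_rack(required_kw, max_inlet_c=25.0):
    """Pick a rack that is cool enough and has the most power headroom."""
    candidates = {
        name: data
        for name, data in racks.items()
        if data["power_headroom_kw"] >= required_kw
        and data["inlet_temp_c"] <= max_inlet_c
    }
    if not candidates:
        raise RuntimeError("no rack has capacity: a stranded-capacity warning sign")
    # Favor the rack with the most remaining power headroom.
    return max(candidates, key=lambda name: candidates[name]["power_headroom_kw"])

print(best_rack(required_kw=2.0))  # -> "B3"
```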
 
The software-defined era is turning business models upside down, shaking up the way businesses run and computers compute. And every new technology implementation comes with new complexities and challenges. Businesses will need to approach their infrastructure and management in a new way to take advantage of SDN. Therefore, data center managers must ensure their facilities remain agile and flexible to meet the current and future needs of businesses.

Read the full article:
http://www.datacenterjournal.com/softwaredefined-networking-shakes-converged/


HPAC Engineering: Schneider Electric Adds Data-Center-Cooling Software Module
May 26, 2015 

Schneider Electric, a specialist in energy management and automation, recently announced the launch of Data Center Operation: Cooling Optimize, a software module within its data-center-infrastructure-management (DCIM) suite StruxureWare for Data Centers. The new capability adds intelligence to existing data-center cooling systems, enabling significant reductions in energy and operational costs, as well as in cooling incidents.

"Most data-center cooling systems are specified to ensure that the hottest racks in the facility have a sufficient cold-air supply,” Soeren Brogaard Jensen, vice president, enterprise software and managed services, Schneider Electric, said. “This results in a large amount of energy being wasted, as the entire facility is overcooled to provide this legacy design capacity. For the managers of these data centers, it is impossible to consider how to reduce the amount of cooling without introducing risk of thermal shutdowns because they lack the information to do so safely."

Data Center Operation: Cooling Optimize enables data-center managers to understand the complexity of airflow within their facilities, including all heat sources, cooling influences, and dependencies. It is a closed-loop system, meaning it learns from any actions, such as inlet-temperature adjustments, to continuously optimize data-center cooling.

Once deployed, Data Center Operation: Cooling Optimize enables operators to monitor the status of data-center health in real time and determine the impact of any cooling event. This enables situations such as overheating, hotspots, and capacity issues to be predicted and avoided. Through continuous analysis of use, future capacity requirements can be planned for and stranded cooling capacity eliminated. Data Center Operation: Cooling Optimize automates responses to changes in data-center environments to reduce hot spots and situations in which cooling exceeds what is needed.
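
To make the closed-loop idea concrete, here is a deliberately simplified Python sketch; it is not Schneider Electric’s algorithm, just a basic proportional feedback loop that trims cooling output when every rack inlet runs below target:

```python
# Not Schneider Electric's algorithm: a deliberately simplified proportional
# feedback loop showing the closed-loop idea. Each pass compares measured
# rack-inlet temperatures to a target and nudges cooling output up or down,
# rather than running every cooling unit flat out.

TARGET_INLET_C = 24.0   # desired worst-case rack inlet temperature
GAIN = 5.0              # percentage points of cooling output per degree C of error

def adjust_cooling(cooling_pct, inlet_temps_c):
    """One control pass: raise cooling if too hot, trim it if overcooled."""
    error = max(inlet_temps_c) - TARGET_INLET_C   # the hottest rack drives the loop
    cooling_pct += GAIN * error                   # proportional correction
    return min(100.0, max(0.0, cooling_pct))      # clamp to a valid output range

level = 80.0
for temps in ([23.1, 22.8, 23.5], [23.0, 22.5, 23.2], [22.9, 22.4, 23.0]):
    level = adjust_cooling(level, temps)
    print(f"cooling output -> {level:.1f}%")   # output steps down: racks are overcooled
```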

In a recent case study, a large Pacific telecom provider retrofitted the system to automatically measure, analyze, and control cooling output to match the requirements of a dynamic data-center environment. The user was able to turn off 13 computer-room air-conditioning units, saving 37 percent in average power use in the first year of operation.

"Through the combination of retrofit software (that) learns intelligently and wireless sensors in data-center racks, data-center managers can quickly start to confidently operate their legacy facilities closer to ASHRAE inlet-temperature guidelines without risk to availability and without any investment in existing cooling systems,” Jensen said. “In use, they can anticipate up (to a) 40-percent reduction in cooling costs."

For more information on Data Center Operation: Cooling Optimize, go to: http://www.apc.com/struxureware/us/en/.


Read the full article:
http://hpac.com/new-products/schneider-electric-adds-data-center-cooling-software-module


IT Business Edge: New Approaches to IT Efficiency
By Arthur Cole, May 25, 2015 

Virtually everyone is in favor of an energy-efficient data center. But if that is the case, why has the industry struggled so mightily to reduce power consumption?

Even with the remarkable gains in virtualization and other advanced architectures, the data center remains one of the primary energy consumers on the planet, and even worse, a top cost-center for the business.

But the options for driving greater efficiency in the data center are multiplying by the day – from low-power, scale-out hardware to advanced infrastructure and facilities management software to new forms of power generation and storage. As well, there is the option to offload infrastructure completely to the cloud and refocus IT around service and application delivery, in which case things like power consumption and efficiency become someone else’s problem.

Somewhere along the data chain, however, electrons have to encounter physical resources, and driving the efficacy of that interaction will be a key function of emerging data center designs. The growing field of Data Center Infrastructure Management (DCIM) is putting a wealth of tools at the enterprise’s disposal, such as the new Cooling: Optimize module in Schneider Electric’s StruxureWare platform. Based on technology from cooling specialist Vigilent, the system utilizes a series of sensors and machine-learning control software to monitor temperature conditions and then adjust AC equipment according to need. Integrating the system into the broader StruxureWare platform is intended to lower upfront costs and produce better long-term efficiencies for an improved TCO over the data center lifecycle.

As the market matures, however, it is becoming clear that efficiency will not improve through technology alone. Rather, it requires a multi-pronged approach that incorporates tools, practices and the mindsets of key data personnel. The Uptime Institute recently combined its various efficiency programs into an official Stamp of Approval that seeks to measure success by outcomes rather than initiatives. In this more holistic view, the group seeks to improve on many of the underlying causes of data inefficiency, namely those that fall under leadership, operations and design. Participating organizations receive either a two-year Approved stamp or a one-year Activated stamp, with benchmarks spanning efforts in planning, decision-making and actions, as well as asset utilization and lifecycle management across data infrastructure. So far, the group has issued stamps to Kaiser Permanente and Mexico’s CEMEX.

As well, leading research organizations are crafting new data center models that stress efficiency of operations as a function of overall performance. The U.S. National Science Foundation Center for Energy-Smart Electronic Systems (ES2) at New York’s Binghamton University recently took a look at data center modeling techniques and found that software approaches like computational fluid dynamics are a good way to start, but fine-tuning existing facilities is much more effective when empirical data on the specific environment is utilized. This requires in-depth airflow measurement across racks, aisles and even ductwork and conduits, accounting for even minor alterations like floor jacks and cutouts. It also helps to return a room to its native state by shutting down equipment and normalizing air pressure to get a better handle on how the working production environment affects environmental conditions.

As I’ve mentioned in the past, power efficiency is a never-ending struggle. Systems can never be too efficient or too green, and like house-cleaning, few people notice all the work that has been done, just the parts that are still dirty.

And the sad fact is that efficiency gains will likely diminish over time, requiring more effort for less result. But as infrastructure scales up and out, even small gains could very well translate to big savings to the operational bottom line.

Read the full article:
http://www.itbusinessedge.com/blogs/infrastructure/new-approaches-to-it-efficiency.html


Data Center Knowledge: DCIM News Roundup, May 22 
By John Rath, May 22, 2015

Schneider Electric wins DCIM award at DCS Awards 2015. The Schneider Electric StruxureWare for Data Centers DCIM software won the DCIM Product of the Year award at the 2015 DCS Awards in Europe. The integrated DCIM suite has won the award for the second year running and provides live dashboards, mobile operation for real-time tracking, and on-the-go access via smartphone and tablet apps.

Schneider Electric – stopping DCIM from becoming isolated. A Schneider Electric blog post discusses the future of DCIM software and the integration, interaction and interfaces that DCIM should have with other enterprise management systems.
 

Read the full article:
http://www.datacenterknowledge.com/archives/2015/05/22/dcim-news-52215/
