Cape Coral, Fla., has 32 sites connected to a centralized core that comprises three data centers in a campus design. The data centers are located in the police department headquarters, city hall, and the emergency operations center.
The data centers are the nuclei of the city’s business operations. A failure in any of them would have a major business impact if the backup process also failed. However, having three data centers provides redundancy, as systems and applications are shared and mirrored between the sites.
Several years ago, city officials decided to install a power protection system for the data center located in the city’s emergency operations center. That data center also hosts all the core systems that run applications for the city.
Fidel Deforte, Cape Coral’s network and telecommunications manager, selected the APC by Schneider Electric InfraStruxure solution to provide this power protection. The APC offering met the city’s criteria for reasonable cost, modularity, and ease of installation. The system was implemented in November 2010.
“It would be very serious if our applications or systems were to fail. Downtime could be catastrophic after four hours or more (depending on the application), resulting in negative public opinion and loss of support for IT budgets, as well as loss of applications leading to the halt of all city business,” says Deforte. “Since we also support Public Safety, there could be a ripple effect due to ‘loss of life’ situations if our applications were to fail or be offline due to loss of system power.”
The system’s modular design fits well with the other equipment cabinets in the data center, says Cape Coral’s network manager. He says that the system’s modularity provides flexibility with city power needs.
“The system has extended the working life of the servers,” Deforte adds.
APC by Schneider Electric
Program Name: APC Channel Partner Program
Year Program Established: 1991
Number of North American Partners: 22,500
North American Channel Chief: Shannon Sbar
Mission Critical: Schneider Electric Defines Physical Infrastructure Solutions For Edge Computing
April 20, 2016
High-bandwidth content, Internet of Things aggregation, and latency-sensitive applications cited as drivers of edge computing growth.
Schneider Electric has announced its strategy and capabilities for supporting enterprises, cloud, and service providers looking to deploy compute resources at “the edge.” Defined as “IT resources placed close to the end user or data source,” edge deployments present unique challenges differing from those of traditional data centers in that they are often remote and without local IT staff support. This means they require a different strategy from that of a conventional data center: their lifecycle is longer, and they must be easy to manage, secure, and deploy while also being resilient.
“To support the IT requirements of today and tomorrow, more computing power is being decentralized to the network edge,” said Kevin Brown, vice president, Data Center Strategy and Technology, Schneider Electric. “With forces such as the Internet of Things (IoT), high bandwidth content and latency-sensitive applications driving this move, Schneider Electric delivers solutions and services that meet the needs of these unique environments.”
As part of its strategy, Schneider Electric outlined the five environments impacted by edge deployments and the company’s capabilities and infrastructure designed to support them:
- Regional colocation / telco data centers, where customers’ use of high-bandwidth content and latency-sensitive applications is driving growth. To support this environment, Schneider Electric’s InfraStruxure™ architecture and prefabricated modules allow for the fast modular build-out of a regional data center for quick time-to-market and low operational costs.
- Remote and branch office locations, particularly in retail and banking sectors, where IT services are being deployed to enrich the customer experience. The Schneider Electric SmartBunker™ CX and NetShelter™ SX provide highly secure, reliable, and remotely-managed one-rack solutions for these remote sites.
- Server rooms where applications must be hosted on premise for a variety of reasons such as latency, security and development flexibility. An increasing number of these applications are being hosted on converged and hyperconverged infrastructure, which simplifies the deployment and operation of the IT infrastructure. To support this environment, Schneider Electric’s InfraStruxure and prefabricated Micro Data Centers provide ease of management, security and scalability.
- Network closets where the reliable connection of employees to all their IT resources has never been more critical to company productivity. Personnel in these environments can utilize Schneider Electric’s integrated, connected solutions and data center management software, StruxureWare™, for simplifying management of distributed sites and ensuring potential equipment failures, security risks and environmental problems are identified before they cause downtime.
- Industrial sites where applications are increasingly connected and leverage data to operate their processes. This convergence of IT and OT (operational technology) requires traditional IT gear to be placed in potentially harsh environments. To support these sites, Schneider Electric’s SmartBunker™ FX provides hardened, remotely-managed solutions to securely house industrial control and IT equipment.
DatacenterDynamics: DCIM: The Big List
By Bill Boyle, April 11, 2016
The DCIM market is expected to grow from $731.5m this year to $2.81bn by 2020. In the following pages, we profile some of the key players adding to that growth.
Data center infrastructure management (DCIM) has been a controversial area. The basic concept is that data centers contain a lot of IT power, as well as a lot of basic infrastructure. Couldn’t they run more efficiently if the infrastructure were put under intelligent control?
Around 2010, as data centers expanded, the DCIM model got a lot of publicity, dozens of players emerged, and venture capital firms invested heavily. But then actual sales grew slowly, and it was widely perceived that DCIM was a bubble.
In the last couple of years, expectations have been revised downward, and the remaining players are being realistic about their prospects. So now seems a good time to round up the major names.
This is not a comprehensive list of DCIM vendors—but it does include many of the players we think are making a difference.
Schneider Electric: StruxureWare
A management software suite designed to collect and manage information about data center assets, resource use and operation status throughout the lifecycle. This information is then distributed, integrated, and applied in ways that help managers optimize the data center’s performance. (Image Source: Schneider)
Schneider recently revealed it is testing a microgrid system which includes a 400 kilowatt photovoltaic system, to develop, test and showcase microgrid energy management solutions. The company’s microgrid controller and StruxureWare Demand Side Operation will optimize use of photovoltaic energy, its storage and any of the client facility’s existing generator sets during grid-connected operation.
Major customers: Mercy Health USA, Microsoft Technology Center, France, Tatts Group
Data Center Frontier: Does OCP Compute for Rack Makers?
By Rich Miller, April 27, 2016
Kevin Brown, vice president, Global Data Center Strategy and Technology at Schneider Electric, with one of the company's Open Rack V2 designs at the recent Open Compute Summit. (Photo: Rich Miller)
SAN JOSE, Calif. – Can the largest players in the data center power and cooling business benefit from the Open Compute Project? Schneider Electric and Emerson Network Power are actively engaging with the open hardware movement, introducing racks, power equipment and even new business units designed to build upon the community’s progress.
The Open Compute Project (OCP) is a growing community of open source hardware hackers who are building on design innovations created for Facebook’s data centers. Over the past five years, a new generation of hardware vendors has leveraged open source OCP designs to win business in the hyperscale computing market, often at the expense of OEM incumbents like Dell and HPE.
The impact of Open Compute designs will increase as more infrastructure sales shift to cloud platforms, according to IDC, which sees cloud spending growing at a 20 percent annual rate while investment in on-premises enterprise data centers is declining slightly.
That’s why Schneider and Emerson were very visible at last month’s Open Compute Summit in San Jose, showing off racks and components designed for the OCP power shelf, which shifts power supplies from the server chassis to the rack.
Schneider Targets OCP for Hyperscale Growth
“The Open Compute Project has been successful in bringing industry leaders together to collaborate on IT designs for large scale data centers,” said Kevin Brown, vice president, Global Data Center Strategy and Technology at Schneider Electric.
Schneider introduced a new Open Rack V2, as well as concept designs for a high density, high efficiency Power Supply Unit (PSU) and Battery Backup Unit (BBU).
The Open Rack provides a 21-inch wide slot for servers, expanding upon the 19-inch width that has long been the standard for data center hardware. The wider form factor will create more room for improved thermal management, as well as better connections for power and cabling. Power supplies are now separate from the server motherboards and reside in a “power shelf” at the base of the rack, where they tie into the busbar at the rear of the unit.
Schneider Electric’s new Open Rack V2, with power shelf components. (Photo: Rich Miller)
To work with OCP clients, Schneider has also started a new business unit led by Liang Zhang, who has been named Director – OCP Offer Development.
The company also offered a white paper and reference designs examining the tradeoffs in implementing Open Compute server and rack designs in facilities with traditional power infrastructure.
“When people talk about saving money with Open Compute, there’s an awful lot about the rack and IT level, but not much about the upstream power architecture,” said Brown. “For OCP to get broad adoption, it’s important to see the full picture.”
Brown said Schneider’s calculations found that the cost of a traditional 2N power infrastructure was approximately $2.77 a watt, compared to $1.53 per watt for an OCP-specific setup with 1N infrastructure. About 31 percent of the savings from OCP was achieved by shifting from 2N to 1N power.
That configuration will be a harder sell for enterprises, said Brown, who urged OCP enthusiasts to consider a range of power designs. “We think most enterprises will want to retain a 2N infrastructure,” said Brown.
To bridge the gap, Schneider has developed a reference architecture for “simplified 2N” power design that splits the difference, coming in at a cost of about $2.15 per watt. A major variable in the cost calculation is the cost of power supplies, which must be matched to the load.
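The per-watt figures Brown cites can be put side by side in a short back-of-the-envelope sketch. This is a minimal illustration using only the numbers quoted in the article (traditional 2N at $2.77/W, OCP 1N at $1.53/W, simplified 2N at $2.15/W, with 31 percent of the full savings attributed to the 2N-to-1N shift); it is not a Schneider cost model.

```python
# Per-watt infrastructure costs cited in the article (USD per watt)
TRADITIONAL_2N = 2.77   # traditional 2N power infrastructure
OCP_1N = 1.53           # OCP-specific setup with 1N infrastructure
SIMPLIFIED_2N = 2.15    # Schneider's "simplified 2N" reference design

# Full savings from moving to an OCP 1N design
total_savings = TRADITIONAL_2N - OCP_1N               # $1.24/W

# Portion of those savings attributed to dropping 2N redundancy (~31%)
redundancy_share = 0.31 * total_savings               # ~$0.38/W

# Savings available while still keeping 2N-style redundancy
simplified_savings = TRADITIONAL_2N - SIMPLIFIED_2N   # $0.62/W

print(f"Full OCP 1N savings:          ${total_savings:.2f}/W")
print(f"  of which from 2N->1N shift: ${redundancy_share:.2f}/W")
print(f"Simplified 2N savings:        ${simplified_savings:.2f}/W")
```

The sketch makes the trade-off concrete: an enterprise that insists on 2N redundancy still captures roughly half of the per-watt savings under the simplified 2N reference design.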
SearchCIO: Hyper-convergence infrastructure: CIOs mull hardware savings
By John Moore, April 28, 2016
Hyper-converged infrastructure may or may not cut hardware costs, but some CIOs and other industry executives suggest an emphasis on sticker price misses the point.
Hyper-converged technology: Beyond hardware
Craig McKesson, executive vice president of enterprise services at T5 Data Centers, said HCI provides an efficiency edge. T5, a data center services provider, in March entered a partnership with hyper-converged vendor Pivot3, opening a Pivot3 Center of Excellence in its 100,500-square-foot Atlanta data center. The arrangement makes HCI an option for T5's customers, which include healthcare, government, gaming, transportation and retail organizations.
In conjunction with the HCI center, T5 has rolled out Schneider Electric's StruxureWare data center infrastructure management tool, which McKesson said provides insight into data center efficiency.
"As our customers move more toward technologies like HCI, it is going to allow us to become more efficient as it relates to the utilization of our space," McKesson said. "Instead of taking up 12 u [of rack space], we can do something in 2 u. It allows us to better plan and run our data centers."
SearchDataCenter: Why some IT leaders aren't falling for edge data centers
By Robert Gates, April 25, 2016
Buzzword bingo games these days typically include 'edge data centers,' but how much real interest is there among users?
New data center terminology is always being tossed around, but IT pros often wonder if anyone's really doing anything with them in practice.
A study sheds new light on whether one of those concepts, edge data centers, is gaining traction and whether there are any benefits. In short, nearly 500 IT professionals expressed "a mild interest in edge data centers," but few have any current plan to use one.
The survey of 492 IT pros, with slightly more than a third from the executive level, found that 18% currently use an edge data center and 46% plan to add an edge data center within the next year. But more than half, 54%, "do not plan to add an edge data center." The survey was conducted by Wyoming-based service provider Green House Data.
Research firm Gartner Inc. suggests an even lower level of interest, estimating only about 5% of organizations are using edge data centers, mostly through cloud computing and cloud brokers, according to Rakesh Kumar, managing vice president of the infrastructure strategies team at Gartner.
Most enterprises are not in edge data centers -- but Kumar says global companies are starting to embrace them.
"It is going to happen in the next two years -- pretty fast, in my opinion," he said.
Edge data centers are part of a hub and spoke architecture that is built around the idea that everything will not be processed in one place. For example, a U.S. car manufacturer could route telemetry data from its cars worldwide to the U.S. for research and development, maintenance, and support, while other data could be sent to edge data centers closer to the cars for analytical processing. This reduces risk by eliminating lengthier transmission, which can be a greater security threat, Kumar said.
Edge data centers will be less about selecting the best hardware and more about the best way to process information passing between different sites -- a "mother ship" main data center surrounded by micro data centers that are closer to users or a set of applications, Kumar said.
"The mindset needs to be, 'Where's the best place to do the processing at the lowest cost and create the best information for my organization?'"
Lower bandwidth costs
The benefits of edge data centers start with lower bandwidth costs from shorter backbone transport -- 52% of those survey respondents listed that as a benefit from a list of five choices. Exactly half of the respondents identified advantages with cheaper colocation space away from expensive primary markets and access to more content providers and carriers. Nearly half of respondents (47%) see the possibility of lower latency for local markets.
Edge data centers are an evolving topic and still an ambiguous term, which, in part, led to the survey, noted Steven Dreher, director of solution architecture at Green House Data.
Green House defines the "edge" as the destination where data is going and where people are using it. This implies a boundary, "a physical proximity to something," according to Dreher. He sees one side of that boundary as data and the other side as the consumers of data.
The survey sought to get a better handle on the business case for edge data centers as well as the challenges and pain points, plus to get an understanding of how data on the edge is being used.
Edge data centers are part of a revitalization of on-premises computing, according to Steven Carlini, senior director, data center global operations at Schneider Electric in Andover, Mass. Some of the newest uses for edge computing include facial recognition applications for retail companies, sporting events and tourist venues.
Future uses for edge data centers will come from industrial companies doing process automation; oil, gas and mining exploration; retail businesses seeking security and standardization for branch offices (often after a merger); and retail business preparing for augmented/virtual reality applications such as smart mirrors, smart shelves and smart kiosks, Carlini said.
"We are starting to see customers who are experiencing latency and doing more and more things on-site, but it is not as broad-based as it will be," he said.
As cloud computing architecture proliferates, there is increased need for local stacks on-site, noted Carlini, who has an electrical and electronics engineering degree. "It is a return to on-prem data services, even if it is cloud services."
Enterprises evaluating a move to an edge data center should start by looking at what the data is doing. For a financial services company, for example, a move to Wyoming to save money may not meet the company's performance requirements, Dreher said.
Put another way: What company would want to move further away from its customers?
Internet and WAN slowdowns at certain times -- such as when Adele's newest album 25 was released and when Star Wars: The Force Awakens tickets went on sale -- create poor performance for everyone, forcing some companies to rethink their data center strategy and explore other options. Increasingly this will include an edge data center, explained Carlini.
"Is having everything in the cloud, in a centralized data center somewhere, really where I want to be?" he asked.