
Sunday, February 17, 2013


5 Data Center Trends For 2013
Energy efficiency will continue to be a major focus of data center operations over the coming year, but that's not all we'll see.
As 2013 opens with new prospects for data center operations, we'll see fresh takes on some old themes, especially around energy efficiency. Rising power costs and pressure from environmental groups will lead data center designers to new technologies that cut traditional energy needs. Here are five important trends you can expect to gain strength in 2013.
1. Location Drives Energy Efficiency
There is one data center concern that overwhelms all others: the need for energy efficiency. At one time, energy costs were viewed as a given, a minor line item next to the expense of hardware purchases and operations labor. But as hardware has become more efficient and procedures more automated, energy has steadily risen to capture 25% of total operating costs, and it now sits close to the top of the list.
In addition, a clash is building between environmentalists on one side and data center operators and their legions of smartphone and tablet users on the other. As the evidence for global warming builds, the unbridled growth of computing in its many forms is coming under attack as a wasteful contributor to the problem. Indeed, such an attack was the theme of a landmark New York Times story published Sept. 22, "The Cloud Factories: Power, Pollution and the Internet."
This clash will take place even though data center builders are showing a remarkable ability to reduce the amount of power consumed per unit of computing executed. The traditional enterprise data center uses just under twice as much electricity as it needs to do the actual computing. The extra amount goes to run cooling, lighting and systems that sustain the data center.
A measure of this ratio is PUE, or power usage effectiveness: total facility power divided by the power that reaches the computing equipment. An ideal PUE would be 1.0, meaning all the power brought to the data center is used for computing -- probably not an achievable goal. But instead of 2.0, Google showed it could build multiple data centers that operated with a PUE of 1.16 in 2010, reduced to 1.14 in 2011.
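For readers who want the arithmetic spelled out, here is a minimal sketch of the PUE calculation just described. The kilowatt figures are assumed, illustrative numbers, not measurements published by Google or any other operator.

```python
# Minimal sketch of the PUE arithmetic: total facility power / IT equipment power.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness for one facility, given two power readings."""
    return total_facility_kw / it_equipment_kw

# A traditional enterprise data center: almost as much overhead as computing.
print(round(pue(total_facility_kw=2000, it_equipment_kw=1050), 2))  # ~1.9
# A Google-class facility: roughly 14% overhead on top of the IT load.
print(round(pue(total_facility_kw=1140, it_equipment_kw=1000), 2))  # 1.14
```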
Each hundredth of a point cut out of the PUE represents a huge commitment of effort. As Jim Trout, CEO of Vantage Data Centers, a wholesale data center space builder in Santa Clara, Calif., explained, only difficult gains remain. "The low-hanging fruit has already been picked," he said in an interview.
Nevertheless, Facebook illustrated with its construction of a new data center in Prineville, Ore., that the right location can drive energy consumption lower. The second-biggest energy hog, just below electricity used in computing, is power consumed for cooling. Facebook built an energy-efficient data center east of the Cascades and close to cheap hydropower. By using a misting technique with ambient air, it can cool the facility without an air conditioning system.
It drove the PUE at Prineville down to 1.09, but a Facebook mechanical engineer conceded few enterprise data centers can locate in the high, dry-air plains of eastern Oregon, where summer nights are cool and winters cold. "These are ideal conditions for using evaporative cooling and humidification systems, instead of the mechanical chillers used in more-conventional data center designs," said Daniel Lee, a mechanical engineer at Facebook, in a Nov. 14 blog.
Most enterprise data centers remain closer to expensive power and must operate year round in less than ideal conditions. Facebook also built a new data center in Forest City, N.C. (60 miles west of Charlotte), where summers are warm and humid, attempting to use the same ambient air technique. To Lee's surprise, during one of the three hottest summers on record, the misting method worked there as well, although at higher temperatures and humidity. Instead of needing 65-degree air, the facility can operate with air as warm as 85 degrees; and instead of a 65% ceiling on relative humidity, it can function at up to 90%. That most likely required increasing the flow of fan-driven air. Nevertheless, a conventional air-conditioning system with its power-hungry condensers would have driven the Forest City PUE far above the Prineville level.
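As a rough illustration of that wider envelope, the sketch below checks ambient readings against the 85-degree and 90% relative-humidity limits Lee describes, treating them as independent ceilings. That is a simplification (real free-cooling decisions also weigh wet-bulb temperature and fan capacity), and the function name and sample readings are assumptions made for illustration.

```python
# Simplified check: can misted ambient air cool the hall, per the stated limits?
def within_free_cooling_envelope(dry_bulb_f: float, relative_humidity_pct: float,
                                 max_temp_f: float = 85.0, max_rh_pct: float = 90.0) -> bool:
    """True if ambient conditions fall inside the (simplified) evaporative-cooling envelope."""
    return dry_bulb_f <= max_temp_f and relative_humidity_pct <= max_rh_pct

print(within_free_cooling_envelope(dry_bulb_f=78, relative_humidity_pct=70))  # True
print(within_free_cooling_envelope(dry_bulb_f=95, relative_humidity_pct=60))  # False
```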
The equipment and design used to achieve that PUE are available for all to see. In 2011, Facebook initiated the Open Compute Project, with the designs and equipment specifications of its data centers made public. Both Prineville and Forest City follow the OCP specs.
Thus, Facebook has set a standard that is likely to be emulated by more and more data center builders. In short, 2013 will be the year when the Open Compute Project's original goal is likely to be put into practice: "What if we could mobilize a community of passionate people dedicated to making data centers and hardware more efficient, shrinking their environmental footprint?" wrote Frank Frankovsky, Facebook's director of hardware design and supply chain, in a blog April 9.
Google is another practitioner of efficient data center operation, using its own design. For a broad mix of older and new data centers, it achieved an overall PUE of 1.14 in 2011, with a typical modern facility coming in at 1.12, according to Joe Kava, VP of Google data centers, in a March 26 blog. In 2010, the overall figure was 1.16.
2. Natural Gas Gains Steam
Beyond reducing the amount consumed, there's another energy issue looming in data center operations. There's usually little debate over what type of energy to use: electricity purchased off the grid is nearly everyone's first choice.
But the U.S. is currently experiencing a glut of natural gas, drilled from underground shale formations in North Dakota, Pennsylvania and the Appalachian states. Gas prices are at their lowest in years due to the oversupply. And a few companies are poised to take advantage of it by running onsite generators that burn natural gas to supply all their power needs.
Datagryd is one of them, with its 240,000-square-foot data center at 60 Hudson Street in Lower Manhattan. CEO Peter Feldman said in an interview that not only can he generate electricity with the cleanest-burning fossil fuel, but his firm has designed a cogeneration facility in which the hot exhaust gases from the generators drive a cooling system for the data center. When he has surplus power, it can be sold to New York City's Con Edison utility.
As California and other states take up the possibility of allowing drilling for natural gas, Datagryd's example may become a pattern for future large data center operations. The ability to generate the needed electricity from fuel delivered by underground pipeline showed its advantages when Hurricane Sandy hit New York and New Jersey. While generators at other nearby data centers ran out of fuel and sputtered to a stop, Datagryd kept delivering compute services to its customers; it didn't need to get diesel trucks over closed bridges or through blocked tunnels, and it continued functioning throughout the crisis, Feldman said.
3. Rise Of The Port-A-Data-Center
Speaking of Hurricane Sandy, another alternative type of data center, located in Dulles, Va., bore the brunt of Sandy's impact without going down. It was AOL's outdoor micro data center, which stands in a module roughly the size of a Port-a-Potty. The modules are managed remotely, so had storm winds knocked the structure over, there would have been no one onsite to set things right.
The unit sits on a concrete slab and contains a weather-tight rack of servers, storage and switching. Power is plugged into the module, the network is connected and water service is installed, since hot air off the servers is used to warm water in a heat exchanger. The water is then cooled outside the module by ambient air. The water runs in a closed-loop piping system, and its temperature can rise as high as 85 degrees with the cooling system still doing its job.
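To make the heat-exchanger arithmetic concrete, here is a minimal sketch with assumed numbers: the 40-kilowatt rack load and 10-degree-Celsius rise in water temperature are illustrative figures, not AOL's or Elliptical Mobile's specifications. It estimates the water flow the closed loop would need in order to carry that heat out to the ambient-air cooling stage.

```python
# Minimal sketch: water flow needed for a closed cooling loop, assumed figures.
WATER_SPECIFIC_HEAT_J_PER_KG_K = 4186  # specific heat capacity of water

def required_water_flow_kg_per_s(it_load_kw: float, water_temp_rise_c: float) -> float:
    """Mass flow at which the loop absorbs it_load_kw while warming by water_temp_rise_c."""
    return (it_load_kw * 1000.0) / (WATER_SPECIFIC_HEAT_J_PER_KG_K * water_temp_rise_c)

# A 40 kW rack with a 10-degree C water temperature rise needs about 0.96 kg/s.
print(round(required_water_flow_kg_per_s(it_load_kw=40, water_temp_rise_c=10), 2))
```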
The design of the system brings the power source close to the servers that are going to use it. The water- and fan-driven cooling system requires little energy compared to air conditioning units. And there's no need for lights or electrical locking mechanisms, as a glasshouse data center typically has. The combination gives the micro data center a potential PUE rating of 1.1, according to spokesmen from Elliptical Mobile, which produces the units.
"We experienced no hardware issues or alerts from our network operations center, nor did we find any issues with the unit leaking," said Scot Killian, senior technology director of data center services at AOL in a report by Data Center Knowledge on Nov. 28.
AST Modular is another producer of micro data centers. These modules may soon start to serve as distributed units of an enterprise's data center, placed in branch offices, distributed manufacturing sites or locations serving clusters of small businesses. AOL is in an experimental phase with its modules and hasn't stated how it plans to make long-term use of them in its business.
4. DTrace Makes Data Centers More Resilient
Data centers of the future will have many more built-in self-monitoring and self-healing features. In 2013, that means DTrace, a dynamic tracing framework whose process-triggered probes reveal how well a particular operating system and application work together.
DTrace is a feature that first appeared in Sun Microsystems' Solaris in 2005, then gradually became available in FreeBSD Unix, Apple Mac OS X and Linux. The Joyent public cloud makes extensive use of it to guarantee performance and uptime through its SmartOS operating system, based on open source Illumos (a variant of Solaris).
Developers and skilled operators can isolate any running process that they wish and order a snapshot of the CPU, memory and I/O it uses, along with other characteristics, through a DTrace script. The probe is triggered by the execution of the targeted process.
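To give a feel for what such a script looks like in practice, here is a hedged sketch, not Joyent's or Twitter's actual tooling: a short D program driven from Python that counts the system calls made by any process named "ruby" over ten seconds. The process name, the 10-second window and the use of Python as a wrapper are all illustrative assumptions; the example presumes a DTrace-capable system such as SmartOS and sufficient privileges.

```python
# Hedged sketch (illustrative, not Joyent's tooling): run a short D program
# from Python on a DTrace-capable OS such as SmartOS/illumos (root required).
import subprocess

D_PROGRAM = """
syscall:::entry /execname == "ruby"/ { @calls[probefunc] = count(); }
tick-10s { exit(0); }
"""

# dtrace compiles and runs the clauses above, then prints the @calls aggregation
# (one line per system call with its count) when exit(0) fires after 10 seconds.
result = subprocess.run(["dtrace", "-n", D_PROGRAM],
                        capture_output=True, text=True, check=True)
print(result.stdout)
```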
Twitter has used DTrace to identify and eliminate a Ruby on Rails process that was slowing its systems by generating backtraces hundreds of frames long, tying up large amounts of compute power without producing beneficial results.
Jason Hoffman, CTO of Joyent, said in a recent interview that effective use of DTrace yields large amounts of data that can be analyzed to determine what goes wrong, when it goes wrong and how to counteract it. The Joyent staff is building tools to work with DTrace in this fashion and provide a more resilient cloud data center, he said.
5. New Renewable Energy Forms
The previously mentioned New York Times story panning the rapid build-out of data centers didn't consider a new possibility. The data centers of the future will not only consume less power per unit of computing done; in some cases they will also be built next to a self-renewing source of local energy, yielding net-zero consumption of carbon-based fuel. There are many prime candidates for renewable power generation around the world -- wind, solar, hydroelectric and geothermal sites -- but most are too remote to become cost-effective suppliers to power grids. It's simply too expensive to build a transmission line to carry the power they can generate to the grid.
But data centers built near such sources could consume the power by bringing the data they're working with to the site, instead of bringing power to the data. Such a site would require only a few underground fiber optic cables to carry the I/O of the computer operations to customers. Facebook found Prineville, Ore., a suitable site for large data center operations; Google and cloud service providers are believed to be building early models of data centers relying on self-renewing energy sources. Microsoft is experimenting with a data center fueled by biogas from a wastewater treatment facility. Some enterprises may experiment with micro-data centers placed near a self-renewing energy source, such as a fast-flowing stream, sun-baked field or wind site.
Sites along swift-flowing streams fed by glacier melt in Greenland and melting snow in Scandinavia have already been chosen for prototypes of data centers built at self-renewing energy locations.