
Sunday, February 17, 2013

Facebook Opens Up Hardware World With Magic Hinge


Amir Michael, the engineer who leads Facebook's hardware design team.
Photo: Jon Snyder/Wired
Imagine that your laptop display weighs 800 pounds. But you can still open it and close it and re-open it as you like, gently pushing it to just the right angle. And when you let go, it stays exactly where you put it.
That should give you a pretty good idea of what Facebook has done in designing an entirely new breed of hardware device for storing all the photos, videos and other digital stuff uploaded by its more than 845 million users.
Inside its massive data centers, Facebook stores about 100 petabytes of photos and videos alone — aka 100 million gigabytes — and as users upload more digital stuff with each passing day, the social networking giant is intent on moving all that data onto custom-designed hardware that seeks to reduce costs and streamline both upgrades and repairs by stripping storage down to the bare essentials. Codenamed “Knox,” Facebook’s storage prototype holds 30 hard drives in two separate trays, and it fits into a nearly 8-foot-tall data center rack, also designed by Facebook.
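To get a feel for that scale, here is a rough back-of-envelope sketch in Python. The 100-petabyte figure and the 30-drive Knox capacity come from this article; the 3-terabyte per-drive capacity is purely an assumption for the sake of illustration, not a number from Facebook.

```python
import math

# Back-of-envelope math for the scale described above.
# 100 PB and 30 drives per Knox unit are from the article;
# the per-drive capacity is an assumption (3 TB drives were typical in this era).
TOTAL_PETABYTES = 100
DRIVE_CAPACITY_TB = 3          # assumed
DRIVES_PER_KNOX_UNIT = 30      # two trays of 15 drives each

total_tb = TOTAL_PETABYTES * 1000                        # 1 PB = 1,000 TB (decimal units)
drives_needed = math.ceil(total_tb / DRIVE_CAPACITY_TB)
knox_units_needed = math.ceil(drives_needed / DRIVES_PER_KNOX_UNIT)

print(f"~{drives_needed:,} drives  ->  ~{knox_units_needed:,} Knox units")
# ~33,334 drives  ->  ~1,112 Knox units
```

Under those assumed drive sizes, photos and videos alone would fill on the order of a thousand Knox units, which is why even small per-device improvements add up.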
The trick is that even if Knox sits at the top of the rack — above your head — you can easily add and remove drives. You can slide each tray out of the rack, and then, as if it were a laptop display, you can rotate the tray downwards, so that you’re staring straight into those 15 drives. Equipped with a “friction hinge” that supports up to 800 pounds, the tray freely moves up and down when you apply a little force, but when you let go, it stays exactly where you put it.
“If I leave it there, it stays there,” says Amir Michael, the man who leads the engineering team that designed the Knox prototype. “It won’t come down on my head.”
Facebook's "Knox" storage device includes a "friction hinge" not unlike the one in your laptop.
Photo: Jon Snyder/Wired
You might say this is a small thing. But when you consider the volume of data stored at Facebook, such small improvements can have a very large effect. And in this, the internet age, Facebook isn’t the only one struggling to cope with epic amounts of online data. Yes, Google, Amazon and Microsoft face similar problems, but so do financial houses, oil and gas companies, and biomedical outfits.
In an effort to reduce power and cost and hassle across the infrastructure that underpins its sweeping online operation, Facebook is designing its own data centers and servers as well as its own racks and storage gear, and it’s openly sharing these designs with the rest of the world, hoping that others will help improve the designs, install them in their own data centers, and ultimately drive down costs even further.
Facebook’s Knox prototype was revealed on Wednesday in San Antonio, Texas, during a mini-conference bringing together members of the Open Compute Project, the consortium Facebook created to promote the use of its “open source” hardware designs — and encourage others to share new designs of their own. When the project was first announced, many questioned whether it would fly, arguing that only a small number of web companies needed anything other than the off-the-shelf hardware sold by the likes of Dell, HP and IBM. But in little more than a year, Facebook has built a thriving community of big-name outfits intent on improving even the tiniest aspects of the world’s mega data centers.
On the Knox tray, a "push" button opens up each drive cage. But with future versions, this will give way to, yes, a "poke" button.
Photo: Jon Snyder/Wired
At Wednesday’s conference, Intel and AMD were set to open source specifications for two new server motherboards, while Dell and HP were due to reveal new server products that slide into the same rack design Facebook built for use with its Knox storage device. Dell and HP are not open sourcing these servers, but in building machines specifically for Facebook’s “Open Rack,” the two tech giants are actively feeding this effort to overhaul data center hardware. AMD and HP only just joined the Open Compute Project, and on Wednesday, Facebook welcomed several others into the fold, including two notable manufacturers — Samsung and Quanta — and two big-name web operations whose businesses depend heavily on data center gear — Salesforce.com and Tencent, China’s largest website.
The aim of the project, says Frank Frankovsky, the ex-Dell man who oversees the hardware group at Facebook and serves as point man for the Open Compute Project, is not only to improve hardware in the data center, but to do so in a way everyone can benefit from. Web giants such as Google and Amazon already use custom-built gear, and they’re streamlining their supply chains by purchasing this gear straight from manufacturers in Taiwan and China. But they treat their designs like trade secrets, viewing them as a competitive advantage best kept hidden from the rest of the world. Ultimately, Frankovsky believes, you can streamline the process even more if everyone shares their designs.
“The Open Compute Project is really about bringing together a convergence of voices,” he says. And other members of the project agree. Though Knox was designed by engineers at Facebook, the project was officially chaired by Cole Crawford, the director of technology at Nebula, a Silicon Valley startup that sells a hardware system for building Amazon-like cloud services, and according to Crawford, the prototype was built with input from the larger community. “As a community member,” he says, “you are absolutely empowered to give your thoughts and ideas.”
With the Knox prototype, engineers can "hot-swap" individual hard drives without tools.
Photo: Jon Snyder/Wired
Servers in Pieces
When Facebook first launched the Open Compute Project in the spring of 2011, it open sourced the designs for its new data center in Prineville, Oregon, and the servers built for use inside the facility. These “vanity-free” machines were specifically designed to reduce power consumption, ease upgrades and repairs, and, yes, drive down the cost of the hardware itself. But in many ways they still looked like traditional servers. There was a CPU and a hard drive and a power supply.
But, working in tandem with other Open Compute members, Facebook is now moving towards a setup that breaks the traditional server into pieces.
With its Open Rack design, the company has widened the inside of a traditional server rack from 19 inches to 21, believing this is far more suitable to modern computing hardware. But Amir Michael and crew have also equipped the rack itself with slots for shared power supplies. With the power supplies in the rack, you needn’t add a supply to every server. “You don’t have to embed a new power supply every time you install a new CPU,” Frankovsky says.
At the same time, the company is separating hard drives from servers. With the Knox prototype, Facebook can stuff up to thirty drives into a single storage device, and then connect these devices to separate CPU-equipped motherboards in the same rack or — with the help of slightly longer cables — an entirely separate rack.
With a separate Open Compute sub-project — known as Virtual I/O — Facebook and other companies are designing a new protocol that will help separate still more server parts. You could put, say, CPUs in one place, and memory in another. As Frankovsky points out, CPUs require an upgrade far more often than other parts of the server. If you separate the CPUs from everything else, you save money simply by upgrading the other parts less often.
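Here is a toy sketch of that cost argument. None of the prices or refresh intervals below come from Facebook or the Open Compute Project; they are made-up assumptions meant only to show why refreshing CPU sleds on their own can be cheaper than refreshing whole servers on the CPU’s schedule.

```python
# A toy illustration of Frankovsky's argument, not real Facebook numbers:
# if CPUs are refreshed more often than drives, memory, and power gear,
# replacing only the CPU sleds is cheaper than replacing whole servers.
# Every price and refresh interval below is a hypothetical assumption.

YEARS = 6
CPU_REFRESH_YEARS = 2          # assumed: CPUs swapped every 2 years
SERVER_REFRESH_YEARS = 2       # a monolithic box must follow the CPU cycle

CPU_SLED_COST = 1_500          # assumed cost of a CPU-only sled
WHOLE_SERVER_COST = 4_000      # assumed cost of CPU + RAM + disks + PSU
OTHER_PARTS_COST = WHOLE_SERVER_COST - CPU_SLED_COST
OTHER_REFRESH_YEARS = 6        # assumed: everything else lasts the full span

monolithic = (YEARS // SERVER_REFRESH_YEARS) * WHOLE_SERVER_COST
disaggregated = ((YEARS // CPU_REFRESH_YEARS) * CPU_SLED_COST
                 + (YEARS // OTHER_REFRESH_YEARS) * OTHER_PARTS_COST)

print(f"monolithic refreshes over {YEARS} years:    ${monolithic:,}")
print(f"disaggregated refreshes over {YEARS} years: ${disaggregated:,}")
# monolithic refreshes over 6 years:    $12,000
# disaggregated refreshes over 6 years: $7,000
```

The exact savings depend entirely on the assumed numbers; the point of the sketch is only that the gap grows as the CPU refresh cycle diverges from everything else’s.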
As it stands, Facebook has merely built the rack and the storage device — and these are still under development. But Dell and HP have already built power-supply-less servers for use with Facebook’s Open Rack — at least according to Facebook and Frankovsky. Dell and HP did not immediately respond to a request for comment. But these prototypes are known as “Coyote” and “Zeus” respectively, and for Frankovsky, they show how valuable Open Rack can be. The ultimate aim, he says, is to create a “Hardware API” for the data center, an interface that devices from any vendor can easily plug into and “just work.”
A short version of Facebook's Open Rack prototype, which can hold up to seven power supplies -- so you needn't install them on individual servers and storage devices.
Photo: Jon Snyder/Wired
Dell and HP Toe the Line
In designing new hardware for its data centers, Facebook is essentially cutting the Dells and the HPs out of its supply chain. Typically, the hardware sold by the likes of Dell and HP is built by “original design manufacturers,” or ODMs, in Taiwan and China, and Facebook is using some of these same ODMs to manufacture its custom-built gear.
But Dell has always said it supported Facebook’s effort to share new hardware designs with the world at large — even though this could mean others bypassing Dell in much the same way. Dell even holds a spot on the board of the Open Compute Project.
It’s never been clear, however, how Dell would benefit from Open Compute. As recently as last month, Tim Mattox — Dell’s vice president of strategy — told us that the company’s role was still undefined because it was unclear how many companies would actually use Facebook’s open source designs. “[Facebook] is trying to bring the industry’s best minds together and get them to think about how we can make hardware better for everyone, and we want to be a participant in that,” he said. “But a lot of times, it’s not clear where these things will go. What they’re producing is only going to be applicable to a certain niche, and we’ll have to see how big the niche is.”
In producing servers for Open Rack, both Dell and HP have apparently realized that this “niche” is an important one. And they should. Other members of the Open Compute Project include Texas-based cloud computing outfit Rackspace and Japanese telecom NTT. Big-name financial outfits Goldman Sachs and Fidelity are not just members but active participants. And though Amazon and Apple haven’t officially joined the project, both had representatives at the last Open Compute Summit in November.
Goldman Sachs is leading an OCP effort to build a common means of managing servers spread across your data center, and according to Frankovsky, Goldman and Fidelity worked hand-in-hand with AMD on the motherboard that the chip designer was due to unveil at Wednesday’s summit.
Both AMD and Intel are open sourcing specifications for server motherboards meant to fit into traditional server racks — as opposed to the new-age racks built by Facebook. AMD declined to comment on the designs prior to Wednesday’s event, but in a brief conversation, Intel vice president Jason Waxman said that Intel’s spec was meant as a blueprint for building servers that fit into both Open Rack and traditional racks. With these two prototypes — “Roadrunner” and “Decathlete” — the idea is to provide an open source motherboard design that can be used in tandem with older hardware. Facebook’s designs, by contrast, often require overhauling the entire data center.
Of course, that’s the primary aim of Facebook’s project. Unhappy with the server, storage, and rack hardware that’s currently available from traditional sellers, the company is trying to rebuild everything from the ground up. According to Peter Krey, a consultant who advises the CIOs and CTOs of multiple Wall Street firms as they build “cloud” infrastructure inside their data centers, part of the appeal of the Open Compute Project is that it takes a “holistic approach” to the design of data center hardware. “The traditional data center design…is Balkanized,” Krey recently told us. “[But] the OCP guys have designed and created all the components to efficiently integrate and work together.”
The new motherboard specs from Intel and AMD provide an alternative to Facebook’s “holistic” design, but at the same time, they seek to bring the ethos behind Facebook’s effort to a whole new set of companies. Like Facebook, the two chip designers are trying to improve on existing hardware — and foster additional improvements by sharing their designs with the world at large.
Facebook hardware man Frank Frankovsky outside the company's new HQ -- aka the former home of one-time hardware giant Sun Microsystems.
Photo: Jon Snyder/Wired
Supply Unchained
The designs shared under the aegis of the OCP aren’t always as “open” as they could be. In some cases, the designs require proprietary technology from the manufacturers Facebook partners with. But Facebook is already using multiple manufacturers for its designs — Taiwanese ODMs Quanta and Wistron — and the aim is to create a world where companies can buy the same hardware from multiple sources.
This is still a ways away. But multiple companies are now lining up to sell hardware based on Open Compute designs. Hyve — a new division of Synnex, an outfit that spent the last 30 years buying and selling computers, hard drives, chips, memory, and all sorts of other hardware — is already selling OCP gear, and on Wednesday, other companies — including Quanta and the New Jersey-based ZT Systems — will announce their intention to sell gear as well.
According to Frankovsky, both Quanta and Wistron are creating brand new American divisions that will sell directly to end users. In the past, these ODMs merely manufactured hardware for the likes of Dell and HP, who then sold it on to companies like Facebook. But thanks to the Open Compute Project, the supply chain is shrinking. On the face of it, this seems like more bad news for Dell and HP. But Frankovsky is adamant that Open Compute should not be seen as a replacement for the traditional server sellers. He believes that soon, Dell and HP will sell their own Open Compute gear. “This is about how all consumers and suppliers can create a more efficient market.”