
Facebook Open Sources Its Servers and Data Centers

By Stacey Higginbotham
Apr. 7, 2011, 10:05am PT

Facebook has shared the nitty-gritty details of its server and data center design, taking its commitment to openness to a new level for the industry by sharing its infrastructure secrets much like it has shared its software code. The effort by the social network will bring web-scale computing to the masses and is a boon for AMD, Intel and the x86 architecture. Sorry, ARM.

At a news event today, Facebook is expected to release a server design that minimizes power consumption and cost while delivering the right compute capacity for the variety of workloads Facebook runs. Unlike Google, which is famous for building its own hardware and keeping its infrastructure advantage close to its vest, Facebook is sharing its server design with the world. Much of the approach mirrors the scaled-down ethos of massive hardware buyers: stripped-down boxes without redundant power supplies, with hot-swappable drives to make repairs and upgrades easier.

But Facebook has added some innovations, such as fans that are larger (the entire server is 50 percent taller than the traditional 1U-sized box) and fewer in number (a design tweak introduced by Rackable, which is now SGI). Those fans account for 2 percent to 4 percent of energy consumption per server, compared with an industry average of 10 percent to 20 percent. Ready for more? Here are more key details on the server side:

>>The outside is 1.2mm zinc pre-plated, corrosion-resistant steel with no front panel and no ads.

>>The parts snap together: the motherboard snaps into place using a series of mounting holes on the chassis, and the hard drive slides into the drive bay on snap-in rails. The unit has only one screw, for grounding. It’s as if the Container Store did cheap servers; someone at Facebook built an entire server in three minutes.

>>Hold onto your chassis, because the server is 1.5U tall, about 50 percent taller than other servers, to make room for larger and more efficient heat sinks.

>>Check out how this scales. It has a reboot-on-LAN feature, which lets a systems administrator instantly reboot a server by sending specific network instructions (see the sketch after this list).

>>The motherboard speaker is replaced with LED indicators to save power and provide visual indicators of server health.

>>The power supply accepts both AC and DC power, allowing the server to switch to DC backup battery power in the event of a power outage.

>>There are two motherboard flavors, with the Intel board offering two Xeon 5500-series or 5600-series processors, up to 144GB of memory and an Intel 5500 I/O hub chip.

>>AMD fans can choose two AMD Magny-Cours CPUs (12- or 8-core), the AMD SR5650 chipset for I/O, and up to 192GB of memory.
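The article doesn’t spell out how reboot-on-LAN is implemented, but for a feel of what “sending specific network instructions” looks like, here is a minimal sketch assuming a Wake-on-LAN-style “magic packet” (six 0xFF bytes followed by the target MAC repeated 16 times) broadcast over UDP port 9. The MAC address and port are illustrative, not from Facebook’s spec.

```python
import socket

def send_magic_packet(mac: str, broadcast_ip: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast a Wake-on-LAN-style magic packet: 6 x 0xFF followed by the
    target MAC address repeated 16 times, sent over UDP."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    payload = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, (broadcast_ip, port))

# Hypothetical usage: target the NIC of the server you want to reboot.
send_magic_packet("00:11:22:33:44:55")
```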

But wait! There’s more. Facebook couldn’t just unleash its server plans on the market. The social networking site has also shared its data center designs to help other startups working at web scale build out their infrastructure in a manner that consumes as little power as possible. Yahoo has also shared its data center plans, with special attention going to its environmentally friendly chicken-coop design, and Microsoft has built out a modular data center concept that lets it put up a data center anywhere in very little time.

Facebook has combined those approaches at its Prineville, Ore., facility, where it has spent two years developing everything that goes inside its data centers, from the servers to the battery cabinets that back them up, to be as green and cheap as possible. For example, Facebook’s designs let it use fewer batteries, and, to illustrate how integrated the whole compute operation is, the building’s house fans and the fans on the servers are coupled together. Motion-sensitive LED lighting is also used inside.

The result is a data center with a power usage effectiveness (PUE) ratio of 1.07. That compares with an EPA-defined industry best practice of 1.5, which is also roughly what Facebook sees in its leased facilities.
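For readers who want the arithmetic: PUE is simply total facility power divided by the power delivered to the IT equipment, so a PUE of 1.07 means only about 7 percent overhead on top of the compute load. The kilowatt figures below are made up for illustration; they are not Facebook’s meter readings.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Illustrative numbers only: 1,000 kW of IT load.
print(pue(total_facility_kw=1070, it_equipment_kw=1000))  # 1.07 -> ~7% overhead (Prineville)
print(pue(total_facility_kw=1500, it_equipment_kw=1000))  # 1.5  -> typical leased facility
```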


Some of the server design decisions allow the equipment to run in steamier environments (the Prineville facility runs at 85°F and 65 percent relative humidity), which in turn lets Facebook rely on evaporative cooling instead of air conditioning. Other innovations are at the building-engineering level, such as using a 277-volt electrical distribution system in place of the standard 208-volt system found in most data centers. This eliminates a major power transformer, reducing the amount of energy lost in conversion. In typical data centers, about 22 to 25 percent of the power coming into the facility is lost in conversions; in Prineville, the rate is 7 percent.
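A rough way to see why dropping a conversion stage matters is to chain together the efficiency of each step between the utility feed and the server. The stage efficiencies below are made-up round numbers for illustration; the article only gives the end-to-end figures (22 to 25 percent lost in a typical facility versus 7 percent in Prineville).

```python
from functools import reduce

def end_to_end_loss(stage_efficiencies):
    """Fraction of incoming utility power lost after chaining conversion stages."""
    delivered = reduce(lambda acc, eff: acc * eff, stage_efficiencies, 1.0)
    return 1.0 - delivered

# Made-up stage efficiencies, for illustration only.
typical = [0.96, 0.90, 0.94, 0.95]   # transformer, UPS, PDU, server power supply
prineville = [0.995, 0.995, 0.94]    # 277V distribution skips a transformer stage
print(f"typical:    {end_to_end_loss(typical):.0%} lost")     # roughly 23%
print(f"prineville: {end_to_end_loss(prineville):.0%} lost")  # roughly 7%
```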

In the waste-not-want-not category, Facebook uses the warm exhaust air from the servers to heat its offices and to temper the incoming outside air when it’s too cold. In the summer, the data center will spray water on incoming warm air to cool it down. Facebook has also designed its chassis and servers to fit precisely into shipping containers to eliminate waste in transport. The plan is to run those servers as hard as possible, so the company doesn’t have to build out more infrastructure.

The social network has shared the specifications and CAD files for the server power supply, server chassis, server motherboards and server cabinet, as well as the battery backup cabinet specification and the data center electrical and mechanical specifications. While not every startup needs to operate at web scale, the designs Facebook released today will certainly give data center operators, as well as vendors in the computing ecosystem, something to talk about. Infrastructure nerds, enjoy.

For more on green data centers, check out our Green:Net event on April 21, where we’ll have infrastructure gurus from Google and Yahoo talking about their data center strategies.

