Clustered Systems Installs “Hyper Efficient” 100kW Rack At SLAC National Accelerator Laboratory
SANTA CLARA, Calif., May 8, 2013 /PRNewswire-USNewswire/ — Clustered Systems recently completed the installation of its first “hyper efficient” computing system at SLAC National Accelerator Laboratory. Initial testing indicates that an average PUE* of 1.07 can be expected, which resets the bar on server energy efficiency. (See video report here)
The high-density, high-performance system, located in the LCLS (Linac Coherent Light Source) facility, comprises 128 servers in an 800mm-wide 48U rack containing four chassis with built-in cooling and a 105kW n+2 redundant power supply. Each chassis cools the servers directly via a series of cold plates through which refrigerant is pumped. Cooling capacity is up to 20 kW per 8U chassis, with up to five fitting into a standard IT rack. For LCLS, the space for the fifth chassis was allocated to a high-performance network switch.
The rack is installed in the SLAC facility in an unconditioned room just off a loading dock. “This is HPC, Cloud or even a Data Center in a box,” said Phil Hughes, CEO of Clustered Systems. “A user can put a system anywhere there is power. No special facilities are required. We have calculated that capital expense can be reduced by up to 50% and total energy consumption by 30%. More investment can go into compute and not have to be shared with bricks and mortar.”
“To be able to pack such computing power into such a small space is unprecedented,” said Amedeo Perazzo, Department Head Controls and Data Systems, Research Engineering Division at SLAC.
The new chassis establishes a standard form factor for high-performance IT blade servers from numerous manufacturers including those specified by the Open Compute Consortium. A consortium of industry leaders including Intel, Emerson Network Power, Panduit, OSS (One Stop Systems, Inc.) and Smart Modular, Inc. contributed to the design.
Rack Power and Cooling
Clustered Systems’ new server rack can provide 105 kW of power and cooling for 80 blades distributed among five 8U chassis. It is designed as an open system: each chassis can be populated in front with 16 compute, networking or storage blades, with four removable switching blades at the rear. All electronic components, including backplanes, are field replaceable. “The cooling architecture provides a unique opportunity for adopters to design to a consistent form factor and meet the needs of one or hundreds of customers,” said Hughes. “One company is developing a version for GPUs which will enable over 300 TeraFLOPS to be achieved in a single rack.”
While a chiller for cooling water is unnecessary in SLAC’s location, one is used to facilitate testing at different temperatures. The infrastructure overhead includes 480VAC to 380VDC power conversion losses, pumps and heat dissipation components.
The system will be used for developing computational methods to tailor new catalysts and for simulations of the interaction of X-rays with matter. IT load and cooling power will be logged in order to assess the efficiency of the cooling system.
Configured for high-performance computing, the rack houses up to 80 front blades in five enclosures and, for the first time, uses PCI Express as the system interconnect.
The rear blades house a PCI Express switch matrix. Additional external PCIe switches connect the individual enclosures. This arrangement permits both inter-processor communication and peripheral sharing. “We are the first company to conceive and build a Disaggregated Rack,” said Robert Lipp, Clustered’s CTO. “Thanks to our very high density, which is four times that of a standard rack, we did not need to wait for optical interconnect.”
About Clustered Systems Company, Inc.
Clustered Systems (www.clusteredsystems.com) is a privately owned company specializing in innovations for system cooling and switching. It is the developer of a revolutionary cold plate cooling system for 1U servers, recognized as the most energy-efficient cooling system available in a series of tests performed by Lawrence Berkeley Labs under the aegis of the Silicon Valley Leadership Group and the California Energy Commission. The system has been licensed to Emerson Network Power and is available through Emerson’s Liebert representatives worldwide. Clustered is now focusing on the next generation of cooling systems for high-density, high-efficiency compute arrays.
About SLAC National Accelerator Laboratory
SLAC is a multi-program laboratory exploring frontier questions in photon science, astrophysics, particle physics and accelerator research. Located in Menlo Park, California, SLAC is operated by Stanford University for the U.S. Department of Energy Office of Science. To learn more, please visit www.slac.stanford.edu.
DOE’s Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit www.science.energy.gov.
* Note to Editors:
PUE (Power Usage Effectiveness) is defined as the ratio of total facility load to IT load. The total load comprises the IT load plus, mainly, power conversion losses and the mechanical cooling load. IT load is measured in some cases at the UPS output and in others at the server power inlet. As most servers on the market today are air cooled, fans are required in the server enclosures to move the air. These fans are also considered part of the IT load rather than part of the cooling infrastructure.
If the server fans’ power consumption is instead treated as infrastructure load and subtracted from the IT consumption, the PUE increases significantly. For example:
Claimed PUE = 1.10; fan power = 5% of IT load.
Fanless PUE = 1.10 / (1 − 0.05) = 1.16
As Clustered’s system has no fans, its fanless PUE is 1.07 (no change).
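The adjustment above can be sketched in a few lines of Python; the function name and structure here are illustrative, not part of any published tool. The total facility load is unchanged when fan power is reclassified, so only the IT-load denominator shrinks:

```python
# Sketch of the fanless-PUE adjustment described in the note above.
# Assumes fan power is a fixed fraction of the measured IT load.

def fanless_pue(claimed_pue: float, fan_fraction: float) -> float:
    """Recompute PUE after reclassifying server fan power as
    infrastructure (cooling) load rather than IT load.

    claimed_pue:  PUE with fans counted in the IT load (total / IT).
    fan_fraction: fan power as a fraction of the measured IT load.
    """
    # Total load is unchanged; the IT denominator shrinks by the fan share.
    return claimed_pue / (1.0 - fan_fraction)

# Worked example from the note: claimed PUE 1.10, fans at 5% of IT load.
print(round(fanless_pue(1.10, 0.05), 2))  # 1.16

# A fanless system (fan_fraction = 0) is unaffected by the adjustment.
print(round(fanless_pue(1.07, 0.0), 2))   # 1.07
```

This makes the comparison in the note concrete: an air-cooled system claiming PUE 1.10 is effectively at 1.16 on a fanless basis, while the Clustered system's 1.07 stands unchanged.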
Contact: Phil Hughes
Phone: (415) 613-9264
SOURCE Clustered Systems