Appro and San Diego Supercomputer Center Launch Trestles on the TeraGrid, the Nation’s Largest Open-Access Scientific Discovery Infrastructure
Trestles Supercomputer Targets High Productivity for Users
Milpitas, CA (Vocus/PRWEB) March 03, 2011
Appro (http://www.appro.com), a leading provider of supercomputing solutions, announces the deployment of an innovative high-performance computing (HPC) system named “Trestles” by the San Diego Supercomputer Center (SDSC) at UC San Diego. The system is based on quad-socket, 8-core AMD Opteron compute nodes connected via a QDR InfiniBand fabric configured by SDSC and Appro. The project is the result of a $2.8 million award from the National Science Foundation (NSF).
Trestles is now available to users of the TeraGrid, the nation’s largest open-access scientific discovery infrastructure. The system is among the five largest in the TeraGrid repertoire, with 10,368 processor cores, a peak speed of 100 teraflop/s, 20 terabytes of memory, and 38 terabytes of flash memory. One teraflop (TF) equals a trillion calculations per second, while one terabyte (TB) equals one trillion bytes of information. Debuting at #111 on the latest Top500 list of supercomputers, Trestles will work with and span the deployments of SDSC’s recently introduced Dash system and a larger data-intensive system, the Appro Xtreme-X™ Supercomputer, named “Gordon” by SDSC, which is scheduled to become operational in late 2011.
All three SDSC systems employ flash-based memory, which is common in much smaller devices such as mobile phones and laptop computers but novel in supercomputers, which generally rely on slower spinning-disk storage.
“Trestles is appropriately named because it will serve as a bridge, making SDSC’s unique, data-intensive resources available to a wide community of users both now and into the future,” said Michael Norman, SDSC’s director.
“UCSD and SDSC are pioneering the use of flash in high-performance computing,” said Allan Snavely, associate director of SDSC and a co-PI for the new system. “Flash disks read data as much as 100 times faster than spinning disk, write data faster, and are more energy-efficient and reliable.”
“Trestles, as well as Dash and Gordon, were designed with one goal in mind: to enable as much productive science as possible as we enter a data-intensive era of computing,” said Richard Moore, SDSC’s deputy director and co-PI. “Today’s researchers are faced with sifting through tremendous amounts of digitally based data, and such data-intensive resources will give them the tools they need to do so.” Moore added that Trestles offers modest-scale and gateway users rapid job turnaround to increase researcher productivity, while also being able to host long-running jobs. Speaking of speed, SDSC and Appro brought Trestles into production less than 10 weeks after initial hardware delivery. “We committed to getting the system in the hands of our users and meeting NSF’s production deadline,” noted Moore.
Early User Successes
Early users of SDSC’s Trestles include Bridget Carragher and Clint Potter, directors at the National Resource for Automated Molecular Microscopy at The Scripps Research Institute in La Jolla, Calif. Their project focuses on establishing a portal on the TeraGrid for structural biology researchers to facilitate electron microscopy (EM) image processing using the Appion pipeline, an integrated database-driven system.
“We are very excited about this early opportunity to use the Trestles infrastructure for high performance structural biology projects,” said Carragher. “Based on our initial experience, we are optimistic that this system will have a dramatic impact on the scale of projects we can undertake, and on the resolution that can be achieved for macromolecular structure.”
To ensure that productivity on Trestles remains high, SDSC will adjust allocation policies, queuing structures, user documentation, and training based on a quarterly review of usage metrics and user satisfaction data. Trestles, along with SDSC’s Dash and Triton Resource clusters, draws on a matrixed pool of expertise in system administration and user support, as well as the SDSC-developed Rocks cluster management software. SDSC’s Advanced User Support team has already established key benchmarks to accelerate user applications, and will subsequently assist users in tuning and optimizing applications for Trestles. Full details of the new system can be found on the Trestles webpage.
Trestles’ policies are designed to meet the needs of a growing user base. NSF’s award to build and deploy Trestles was announced last August by SDSC, and Trestles will be available to TeraGrid users through 2013. In November 2009, SDSC announced a five-year, $20 million grant from the NSF to build and operate Gordon, the first high-performance supercomputer to employ a vast amount of flash memory. Dash, a smaller prototype of Gordon, was deployed in April 2010. All of these systems are being integrated by Appro and share a design philosophy of combining commodity parts in innovative ways to achieve high-performance architectures.
Appro is a leading developer of supercomputing solutions, uniquely positioned to support high-performance computing markets with a focus on medium- to large-scale deployments. Appro accelerates technical, data-intensive applications for faster business results through outstanding price/performance, balanced architectures coupled with the latest technologies, open standards, and engineering expertise. Appro is headquartered in Milpitas, CA, with offices in Korea and Houston, TX. To learn more, go to http://www.appro.com
As an Organized Research Unit of UC San Diego, SDSC is a national leader in creating and providing cyberinfrastructure for data-intensive research, and celebrated its 25th anniversary in late 2010 as one of the National Science Foundation’s first supercomputer centers. Cyberinfrastructure refers to an accessible and integrated network of computer-based resources and expertise, focused on accelerating scientific inquiry and discovery. SDSC is a founding member of TeraGrid, the nation’s largest open-access scientific discovery infrastructure.
For the original version on PRWeb visit: http://www.prweb.com/releases/prweb2011/3/prweb8174462.htm