ASCR Monthly Computing News Report - June 2010
Jaguar Simulates the Molecular Machines That Replicate and Repair DNA
Ivaylo Ivanov of Georgia State University, John Tainer of the Scripps Research Institute, and J. Andrew McCammon of the University of California–San Diego used Jaguar, a Cray XT high-performance computing system at Oak Ridge National Laboratory (ORNL), to elucidate the mechanism by which accessory proteins called sliding clamps are loaded onto DNA strands and coordinate enzymes that enable gene repair or replication. They shared their findings, which suggest a new approach for attacking diverse diseases, in the May 10 issue of the Journal of the American Chemical Society.
“This research has direct bearing on understanding the molecular basis of genetic integrity and the loss of this integrity in cancer and degenerative diseases,” says Ivanov, whose investigation was supported by the Howard Hughes Medical Institute and the National Science Foundation’s Center for Theoretical Biological Physics. In 2009 the researchers were awarded 2.6 million processor hours through INCITE, the Innovative and Novel Computational Impact on Theory and Experiment program, which provides pioneering scientists and engineers with access to the Department of Energy’s leadership computing facilities at Oak Ridge and Argonne national laboratories. To further the investigation, in 2010 the researchers received a two-year allocation of 4 million processor hours on Jaguar XT5.
Nuclear Theorists Use Jaguar to Pin Down the Proton-Halo State in Fluorine-17
A halo may be difficult to acquire in terms of virtue, but it can also be tough to calculate in terms of physics. Gaute Hagen from ORNL, Morten Hjorth-Jensen from the University of Oslo, and Thomas Papenbrock from the University of Tennessee, Knoxville (UTK), have managed to do just that, however, and report their findings in the article “Ab-initio computation of the 17F proton-halo state and resonances in A = 17 nuclei,” published in May in Physical Review Letters.
The ORNL-Oslo-UTK team developed and implemented sophisticated theoretical and computational methods to solve the nuclear many-body problem (precise calculations become difficult for any system of more than two interacting bodies) for the 17 interacting particles of the fluorine-17 isotope. The team used interactions derived from first principles (ab initio) and rooted in quantum chromodynamics, which describes the strong interactions between elementary particles, to build the nuclear Hamiltonian, the operator that describes the energy of a system in terms of its momentum and position coordinates. They then applied the coupled-cluster method, a numerical technique for solving such quantum many-body problems, on ORNL's Jaguar supercomputer to complete the ab initio calculations of the proton-halo state in fluorine-17. The calculations consumed nearly 100,000 processor hours on Jaguar and yielded a computed binding energy (the energy that holds the nucleus together) in close agreement with experimental data.
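For readers unfamiliar with the method, the standard coupled-cluster ansatz (not spelled out in the paper summary above) writes the correlated many-body wave function as an exponential of cluster operators acting on a reference state:

```latex
|\Psi\rangle = e^{\hat{T}}\,|\Phi\rangle,
\qquad
\hat{T} = \hat{T}_1 + \hat{T}_2 + \cdots
```

Here $|\Phi\rangle$ is a reference (e.g., Hartree-Fock) state and $\hat{T}_1$, $\hat{T}_2$ generate one- and two-particle excitations; truncating the expansion makes the cost polynomial in the number of particles rather than exponential, which is what makes a 17-particle ab initio calculation tractable on a machine like Jaguar.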
Global Arrays Toolkit Version 4.3 Released
The Global Arrays (GA) team at Pacific Northwest National Laboratory has released version 4.3 of its award-winning programming model software. GA significantly simplifies writing code for supercomputers, helping scientists translate their ideas into highly efficient software in which mathematical computations run independently on subsets of the supercomputer's processors. Version 4.3 includes the following features:
- Scalability up to 200K processes (Gordon Bell Finalist at SC 2009 by Apra et al.)
- Optimized port for leadership class machines (e.g., Cray XT5) and Linux clusters
- Support for sparse data operations (see GA user manual chapter 11 for details)
As part of the ACM Gordon Bell Prize finals at SC'09, a GA-based parallel implementation of coupled-cluster calculations performed at 1.39 petaflops using more than 223,000 processes on ORNL's Jaguar petaflop leadership-class system (Aprà et al., "Liquid water: obtaining the right answer for the right reasons," SC'09). The software is available for download from the GA website, and a user mailing list is available to facilitate discussions about the GA toolkit.
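To illustrate the programming model GA provides, the sketch below mimics its one-sided put/get/accumulate operations on sections of a logically shared array. This is a hypothetical single-process Python toy, not the actual GA C/Fortran API; in the real toolkit the array is physically distributed across MPI processes and these operations complete without any action by the process that owns the data.

```python
# Hypothetical sketch of the Global Arrays programming model: a logically
# shared 2-D array with one-sided put/get/accumulate on rectangular
# sections. A single flat list stands in for the distributed storage.

class GlobalArraySketch:
    def __init__(self, shape):
        self.shape = shape
        self.data = [0.0] * (shape[0] * shape[1])

    def _index(self, i, j):
        return i * self.shape[1] + j

    def put(self, lo, hi, block):
        # One-sided write of the section [lo, hi): no matching receive
        # is needed on the process that "owns" the data.
        for bi, i in enumerate(range(lo[0], hi[0])):
            for bj, j in enumerate(range(lo[1], hi[1])):
                self.data[self._index(i, j)] = block[bi][bj]

    def get(self, lo, hi):
        # One-sided read of a section into a local buffer.
        return [[self.data[self._index(i, j)] for j in range(lo[1], hi[1])]
                for i in range(lo[0], hi[0])]

    def acc(self, lo, hi, block, alpha=1.0):
        # Accumulate: data[section] += alpha * block (atomic in real GA).
        for bi, i in enumerate(range(lo[0], hi[0])):
            for bj, j in enumerate(range(lo[1], hi[1])):
                self.data[self._index(i, j)] += alpha * block[bi][bj]

ga = GlobalArraySketch((4, 4))
ga.put((0, 0), (2, 2), [[1.0, 2.0], [3.0, 4.0]])
ga.acc((0, 0), (2, 2), [[1.0, 1.0], [1.0, 1.0]], alpha=10.0)
print(ga.get((0, 0), (1, 2)))  # -> [[11.0, 12.0]]
```

Because each process reads and writes only the sections it needs, computations on disjoint subsets of the array can proceed independently, which is the property the news item above refers to.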
Science at Scale: SciDAC Astrophysics Code Scales to Over 200K Processors
Performing high-resolution, high-fidelity, three-dimensional simulations of Type Ia supernovae, the largest thermonuclear explosions in the universe, requires not only algorithms that accurately represent the correct physics, but also codes that effectively harness the resources of the next generation of the most powerful supercomputers. Through DOE’s Scientific Discovery through Advanced Computing (SciDAC), Lawrence Berkeley National Laboratory’s (LBNL’s) Center for Computational Sciences and Engineering (CCSE) has developed two codes that can do just that.
MAESTRO, a low Mach number code for studying the pre-ignition phase of Type Ia supernovae as well as other stellar convective phenomena, has just been demonstrated to scale to almost 100,000 processors on the Cray XT5 supercomputer "Jaguar" at the Oak Ridge Leadership Computing Facility. And CASTRO, a general compressible astrophysics radiation/hydrodynamics code that handles the explosion itself, now scales to over 200,000 processors on Jaguar, almost the entire machine. Both scaling studies simulated a pre-explosion white dwarf with a realistic stellar equation of state and self-gravity. The research team includes Ann Almgren, John Bell, Michael Lijewski, and Andy Nonaka of LBNL and Michael Zingale and Chris Malone of Stony Brook University.
Numerical Studies of Superfluid Quantum Systems
A research collaboration among Pacific Northwest National Laboratory, the University of Washington, the Warsaw University of Technology in Poland, and the Wuhan Branch of the Chinese Academy of Sciences has produced a suite of stable application software for numerically studying a broad class of many-body non-equilibrium quantum dynamics problems, applying an extension of time-dependent density functional theory to superfluid fermionic systems. Results on vortex formation in dilute Fermi gases in traps, along with a suite of studies demonstrating large-amplitude collective motion in nuclei, were recently presented at the UNEDF meeting in East Lansing, Michigan. The numerical experiments were conducted on JaguarPF, the Cray XT5 machine at the Oak Ridge Leadership Computing Facility, where the team demonstrated the software running at over 97 percent of the full machine scale. The work is funded by DOE SC ASCR, DOE SC NP, and DOE NNSA.
ADIC2 — a Flexible, Extensible Tool for Differentiating C and C++ Code
Two researchers at Argonne National Laboratory have developed a new software tool, called ADIC2, for the automatic differentiation of C and C++ code through source-to-source transformation. The work is part of an ongoing project by Sri Hari Krishna Narayanan, a postdoctoral fellow, and Boyana Norris, a computer scientist, both members of Argonne’s Mathematics and Computer Science Division.
Automatic differentiation (AD) is a process for producing derivative computations from computer programs. The resulting derivatives are accurate to machine precision with respect to the original computation and can be used in many contexts, including uncertainty quantification, numerical optimization, nonlinear partial differential equation solvers, and the solution of inverse problems using least squares.
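The forward mode of AD can be illustrated with dual numbers: each value carries its derivative alongside it, and every arithmetic operation propagates both by the chain rule. The minimal Python sketch below is only a conceptual illustration; ADIC2 itself works by source-to-source transformation of C and C++ code, which this toy does not attempt.

```python
# Minimal forward-mode automatic differentiation via dual numbers.
# Each Dual carries (value, deriv); arithmetic applies the chain rule,
# so derivatives are exact to machine precision, unlike finite differences.
import math

class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)
    __rmul__ = __mul__

def sin(x):
    # Chain rule: (sin u)' = cos(u) * u'
    return Dual(math.sin(x.value), math.cos(x.value) * x.deriv)

# Differentiate f(x) = x*sin(x) + 3x at x = 2 by seeding dx/dx = 1.
x = Dual(2.0, 1.0)
f = x * sin(x) + 3 * x
print(f.value, f.deriv)  # f.deriv equals sin(2) + 2*cos(2) + 3 exactly
```

A source transformation tool such as ADIC2 achieves the same effect by rewriting the original program's statements rather than by overloading operators at run time, which keeps the generated derivative code fast and applicable to large scientific codes.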
ADIC2 builds on several ideas of its predecessor, ADIC, including its use of multiple independent differentiation modules. ADIC2, however, addresses many of the limitations of the earlier tool, which has been in use for over a decade for differentiating codes written in the C programming language. Far more than a simple extension of ADIC, the ADIC2 tool is a completely new implementation that leverages the ROSE compiler infrastructure developed at Lawrence Livermore National Laboratory. ROSE relies on a commercial front-end, which ensures that all language features are parsed correctly. ADIC2 also includes a new interface to the OpenAD differentiation modules being developed by the DOE SciDAC Institute for Combinatorial Scientific Computing and Petascale Simulation.
NERSC and HDF Group Optimize HDF5 Library to Improve I/O Performance
There are several layers of software that deal with input/output (I/O) on high performance computing (HPC) systems. Getting these layers of software to work together efficiently can have a big impact on a scientific code’s performance. That’s why the U.S. Department of Energy’s National Energy Research Scientific Computing Center (NERSC) has partnered with the nonprofit Hierarchical Data Format (HDF) Group to optimize the performance of the HDF5 I/O library on modern HPC platforms.
Because parallel performance of HDF5 had been trailing on newer HPC platforms, especially those using the Lustre filesystem, NERSC has worked with the HDF Group to identify and fix performance bottlenecks that affect key codes in the DOE workload, and to incorporate those optimizations into the mainstream HDF5 code release so that the broader scientific and academic community can benefit from the work.
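One common class of optimization in such work is aligning I/O requests to the parallel filesystem's geometry. As a hypothetical illustration (not code from the actual HDF5 patches), a tuning helper might pad a dataset chunk's byte size up to the next multiple of the Lustre stripe size, so that chunk boundaries coincide with stripe boundaries and each write touches as few storage targets as possible:

```python
# Hypothetical illustration of one Lustre-oriented tuning idea: pad a
# chunk's byte size up to the next multiple of the filesystem stripe
# size so chunk writes align with stripe boundaries.

def aligned_chunk_bytes(chunk_elems, elem_size, stripe_size=1 << 20):
    """Return (raw_bytes, padded_bytes) for a chunk of chunk_elems
    elements of elem_size bytes, padded to a 1 MiB default stripe."""
    raw = chunk_elems * elem_size
    padded = -(-raw // stripe_size) * stripe_size  # ceil to a multiple
    return raw, padded

# A chunk of 300,000 float64 values against a 1 MiB stripe:
raw, padded = aligned_chunk_bytes(300_000, 8)
print(raw, padded)  # 2400000 raw bytes, padded to 3 * 1 MiB = 3145728
```

The function names and the specific padding policy here are assumptions for illustration only; the actual NERSC/HDF Group optimizations were made inside the HDF5 library and its MPI-IO layer.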
LANL Team Wins Outstanding Paper Award at Computational Science Conference
Los Alamos National Laboratory (LANL) scientists Konstantin Lipnikov, David Moulton and Daniil Svyatskiy, all from the Theoretical Division, received the Outstanding Paper Award at the 10th International Conference on Computational Science (ICCS) in Amsterdam, The Netherlands, on June 1. The award was given for the paper entitled “A multiscale multilevel mimetic (M3) method for well-driven flows in porous media,” which was presented by Lipnikov. The development of the method was supported by two ASCR projects, Mimetic Methods for PDEs (PI: M. Shashkov) and Predictability with Stochastic PDEs (PI: D. Moulton).
ICCS 2010 brought together about 400 researchers and scientists from mathematics and computer science, representing various application areas, as well as software developers and vendors. Attendees discussed problems and solutions in these areas, identified new issues, shaped future research directions, and explored how industrial users can apply advanced computational techniques.
NERSC Authors Win Best Paper Award at Cray User Group Meeting
The Best Paper award at CUG2010, the Cray User Group meeting held May 24–27 at the University of Edinburgh, Scotland, went to "Application Acceleration on Current and Future Cray Platforms," written by Alice Koniges, Robert Preissl, and Jihan Kim of NERSC; David Eder, Aaron Fisher, Nathan Masters, and Velimir Mlaker of Lawrence Livermore National Laboratory; Stephane Ethier and Weixing Wang of Princeton Plasma Physics Laboratory; Martin Head-Gordon of UC Berkeley and LBNL's Chemical Sciences Division; and Nathan Wichmann of Cray Inc. The paper examines three different applications and means for improving their performance, with a particular emphasis on methods applicable to manycore, multicore, and future architectural designs.
Berkeley Lab Team Wins Best Paper Award at Cloud Computing Conference
A team of researchers from Lawrence Berkeley National Laboratory has received the Best Paper Award at ScienceCloud 2010, the 1st Workshop on Scientific Cloud Computing, sponsored by the Association for Computing Machinery. The paper, "Seeking Supernovae in the Clouds: A Performance Study," was written by Keith Jackson and Lavanya Ramakrishnan of the Advanced Computing for Science Department (ACS), Karl Runge of the Physics Division, and Rollin Thomas of the Computational Cosmology Center (C3).
PNNL Researcher Receives ASCR Leadership Computing Challenge Award
Guang Lin, a computational mathematics researcher in the Fundamental and Computational Sciences Directorate at Pacific Northwest National Laboratory, has been selected to receive a 2010 ASCR Leadership Computing Challenge (ALCC) award.
The ALCC program allocates up to 30 percent of the computational resources at NERSC and at the Leadership Computing Facilities at Argonne and Oak Ridge. Allocations go to special situations of interest to the Department, with an emphasis on high-risk, high-payoff simulations directly related to the Department's energy mission (such as advancing the clean energy agenda and understanding the Earth's climate), on national emergencies, and on broadening the community of researchers capable of using leadership computing resources.
Lin's proposal, "Stochastic Nonlinear Data-Reduction Methods with Detection and Prediction of Critical Rare Events," will be allocated 5 million processor hours over the next year on the Jaguar supercomputer at Oak Ridge National Laboratory. The project focuses on extracting and reducing data from massive volumes of information to quantify and reduce uncertainty in climate models. If successful, the research will have a revolutionary impact on how scientists analyze petascale volumes of noisy, incomplete data from complex systems, ultimately leading to better prediction and decision-making.
ANL, PNNL Scientists Selected for U.S. Frontiers of Engineering Symposium
Ravi Madduri, a principal software development specialist in Argonne’s Mathematics and Computer Science Division, and Terence Critchlow, a chief scientist in the Computational Sciences and Mathematics Division at Pacific Northwest National Laboratory, have been selected to participate in the National Academy of Engineering’s 16th Annual U.S. Frontiers of Engineering symposium. The event brings together 100 of the country’s outstanding young engineers from industry, academia, and government to discuss pioneering technical and leading-edge research in various engineering fields and industry sectors. Participation is by invitation, following a competitive nomination and selection process. Madduri and Critchlow are two of 87 engineers selected from among 265 applicants.
Madduri has made major contributions in distributed (“Grid”) computing. He has developed infrastructure that has been integrated into the caBIG services used by major cancer research centers nationwide and has been incorporated into the Grid Service Authoring Tool under the DOE SciDAC Center for Enabling Distributed Petascale Science.
Critchlow’s research interests are focused in the areas of large-scale data management, data analysis, data integration, metadata, data dissemination and scientific workflows. In particular, current projects for both science and security customers include research in multi-dimensional data dissemination, online analytical processing (OLAP), link analysis, and defining workflow context within the Kepler workflow engine.
The symposium will be held Sept. 23-25 at the IBM Learning Center in Armonk, N.Y., and will examine cloud computing, autonomous aerospace systems, engineering and music, and engineering inspired by biology.
Berkeley Lab Team Receives NASA Award for Computing Infrastructure Development
Julian Borrill, Christopher Cantalupo and Theodore Kisner of the Computational Cosmology Center (C3) were honored with a NASA Public Service Group Award for developing the supercomputing infrastructure for the U.S. Planck Team’s data and analysis operations at NERSC. The team was also named on a second award to the U.S. Planck Analysis team for partnering with European colleagues to conceive and implement an overall data analysis strategy for the mission.
ORNL Staffers Recognized at ISC’10 for Work on InfiniBand Software
On June 3, Richard Graham and Stephen Poole of ORNL’s Computer Science and Mathematics Division and the Oak Ridge Leadership Computing Facility jointly accepted an HPC Advisory Council award with Mellanox Technologies at the 2010 International Supercomputing Conference (ISC’10) in Hamburg, Germany. The award was presented for their collaborative work on software technology for Mellanox’s InfiniBand network, a switched fabric communications link between processor nodes and input/output nodes in high-performance computers.
“We’ve been working with Mellanox on the technical side for about two years to help them understand our application requirements to define a new network product,” said Graham, Applications and Performance Tools Group leader at ORNL. Graham and Poole assisted Mellanox with software requirements that increase the offload capabilities of the InfiniBand network, boosting application performance through faster, more balanced communication between processing units in high-performance systems. “This award recognizes the fact that what we’ve done over the last couple of years has a major impact on the high-performance computing industry,” said Graham.
Sandia’s Cynthia Phillips Co-Chairs Three Major Conferences within Six Months
Cynthia Phillips has (co)chaired three major parallel computing conferences in 2010. She was co-chair of the SIAM Conference on Parallel Processing for Scientific Computing (February 2010), program committee chair of the IEEE International Parallel and Distributed Processing Symposium (April 2010), and program chair for the ACM Symposium on Parallelism in Algorithms and Architectures (June 2010).
Berkeley Lab’s Shalf and Strohmaier Make Key Contributions to ISC’10
John Shalf of NERSC and Erich Strohmaier of LBNL’s Computational Research Division chaired sessions at the International Supercomputing Conference (ISC’10), held May 30 to June 3 in Hamburg, Germany. Shalf chaired a session on “HPC: Future Technology Building Blocks,” and Strohmaier chaired a session on “Focusing LINPACK: The TOP500 Yardstick” and co-chaired a “Hot Seat Session.” Shalf and Strohmaier also co-organized a Birds-of-a-Feather session on HPC energy efficiency, which drew an overflow crowd of more than 120 participants.
Strohmaier and LBNL Associate Lab Director Horst Simon, along with Hans Meuer of the University of Mannheim and Jack Dongarra of the University of Tennessee, are co-authors of the TOP500 List, the latest edition of which was officially released at ISC’10. Although Jaguar retained the top position, the biggest news is that there are now two Chinese systems in the TOP10. Argonne’s Intrepid is No. 9, and NERSC’s Franklin system is now ranked No. 17.
ASCR Researcher Tamara Kolda Gives Keynote Lecture
Tamara G. Kolda (Sandia) delivered a keynote address at the BIT 50: Trends in Numerical Computing Conference in Lund, Sweden, June 17-20, 2010. The conference celebrated the 50th anniversary of the journal BIT, which (translated and backwards) is short for the Journal for Information Processing. The goal of the meeting was to look forward to new trends in scientific computing, and Kolda was invited to speak on applications in data mining. She discussed methods for link prediction using matrix and tensor methods.
OLCF’s Jaguar Retains Top Spot on TOP500 List
When the latest edition of the twice-yearly TOP500 List of the world’s most powerful supercomputers was released June 1 at the 2010 International Supercomputing Conference, the Oak Ridge Leadership Computing Facility’s Cray XT5 “Jaguar” was again named the fastest supercomputer in the world, with a performance of 1.759 petaflops, or over 1.7 thousand trillion calculations per second.
2 Billion Hours Served at the Argonne Leadership Computing Facility
The Argonne Leadership Computing Facility (ALCF) recently served up its two-billionth hour of science on Intrepid, the facility’s IBM Blue Gene/P. During an annual scaling workshop at the ALCF in May, researchers from Stony Brook University’s Department of Applied Mathematics and Statistics were conducting Rayleigh-Taylor simulations of turbulent mixing when word came that the impressive milestone had been reached.
“It’s an exciting occasion to celebrate the amount of research the ALCF facilitates,” said Director Pete Beckman. The ALCF hosted a party on June 16 to mark the two-billionth hour achievement.
ORNL Collaborates with Allinea to Add Muscle to Debugger
A collaboration between ORNL and software toolmaker Allinea Software has produced a formidable weapon in the fight against application bugs. When it is released this summer, Allinea DDT (Distributed Debugging Tool) will allow programmers to step through applications running 220,000 simultaneous processes and address problems as they arise. Allinea representatives are working with ORNL’s Application Performance Tools Group to extend Allinea DDT to a scale 40-plus times greater than previous high-end debugging tools—in other words, to the scale applications reach when they run on the world’s most powerful supercomputers.
“Before we joined into this project, tools weren’t capable of getting anywhere near the size of the hardware,” noted Allinea’s David Maples. “The problem was that a debugging tool might do 5,000 or 10,000 parallel tasks if it was lucky, when the machines and applications wanted to write things that could do 200,000-plus. So the tools just got beat up by the hardware.” The collaborators worked with ORNL’s Cray XT Jaguar system, which has more than 220,000 processors and is currently ranked as the world’s most powerful system. “This project means application developers have a chance to debug their code in a reasonable amount of time at scale,” said Application Performance Tools Group leader Richard Graham. “They won’t have to write special case code to debug things, and go through the process of debugging the debug code.”
ESnet Launches “Network Matters” Blog
ESnet, DOE’s Energy Sciences Network, has started a new blog called “Network Matters.” The blog will keep the DOE community informed about ESnet projects in progress and the science ESnet makes possible. Recent topics include progress toward the 100G prototype network and data distribution for the Large Hadron Collider. ESnet is a high-speed network serving thousands of DOE scientists at over 40 institutions and connecting to more than 100 other networks, enabling collaboration on some of the world’s most important scientific research challenges. ESnet is managed and operated by the ESnet team at Lawrence Berkeley National Laboratory.
OUTREACH & EDUCATION:
ORNL Supercomputing Crash Course Gets the Next Generation Up to Speed
Attendance quadrupled over last year’s at a crash course in supercomputing offered by the Oak Ridge Leadership Computing Facility (OLCF). ORNL computational scientists have taught the popular course five times in as many years. This year Arnold Tharrington taught an introductory class on June 17, and Rebecca Hartman-Baker taught an advanced class on June 18. The 120 attendees of the free course included undergraduate and graduate students at ORNL for summer internships, postdoctoral researchers, and ORNL staff members. Plans are in the works for another crash course in the fall geared toward ORNL staff members.
“This training is definitely atypical of what I would receive at a university,” said Ann Wells, a summer intern through the Research Alliance in Math and Science (RAMS) program, who in the fall will start a doctoral program at the University of Tennessee (UT). “Going to UT and having ORNL next door have given me a lot of opportunities to utilize resources that aren’t available at a university. The supercomputing crash course has been a perfect way to learn how to use the supercomputers. I have no experience using the supercomputers, and I have been able to follow along and understand how to manipulate basic data.”
LBNL Hosts 100 International Attendees at VECPAR’10 Conference
Four Computing Magazines Interview LBNL’s Kathy Yelick and John Shalf
EnterTheGrid/Primeur Magazine, a European online magazine for high performance computing and networking, interviewed John Shalf, head of NERSC’s Advanced Technologies Group, at ISC’10 in Hamburg. The resulting article, titled “It takes three to tango in exascale computing: memory, photonic interconnects and embedded processors,” discusses the Green Flash project and the future of supercomputing from the hardware side. On the software side, Shalf discussed native parallel programming languages with International Science Grid This Week in “Q & A — John Shalf talks parallel programming languages.”
According to editor Miriam Boon, the Shalf interview was the most-read article ever for the newsletter, recording more than 4,000 unique page views.
Oakland Tech Students Tour NERSC Machine Room, Get Souvenirs
As part of the Lab’s new outreach initiative, NERSC has started a partnership with Oakland Technical High School’s Computer Science and Technology Academy, a small academy within the larger high school. On Thursday afternoon, June 3, 12 students from Oakland Tech and their teacher, Emmanuel Onyeador, visited the Oakland Scientific Facility for an introduction to computational science and supercomputer architecture, and a tour of the NERSC machine room. The next week, a second group from the school paid a similar visit.
NERSC and the Oakland Tech Computer Science and Technology Academy plan to do more outreach programs this summer and throughout the school year. NERSC’s Katie Antypas and Jon Bashor have joined the school’s computer science advisory committee, which will hold its next meeting at NERSC.