ASCR Monthly Computing News Report - January 2009




RESEARCH NEWS:

ORNL Team Simulates Core-Collapse Supernovae in 3-D

A team led by Oak Ridge National Laboratory (ORNL) astrophysicist Anthony Mezzacappa is using ORNL's petascale Jaguar supercomputer to run the first-ever three-dimensional core-collapse supernova simulations that can be considered realistic. The team is conducting the first in a series of supernova simulations, starting with a star about 15 times the mass of the sun. Each simulation takes about three months and yields an immensely detailed look at just under the first second of a supernova. The project will include stars in the range of "typical" core-collapse supernovae, with masses from 10 to 25 times that of the sun.

The simulations include nearly all the factors likely to be important to a core-collapse supernova explosion, including the behavior of neutrinos, tiny particles that are nearly undetectable on Earth but may play a major role in blowing massive stars into space. The team recreates the supernova using a three-part software application known as Chimera, named after the three-sectioned monster of Greek mythology. For this application, the three components are an astrophysical hydrodynamics code, a neutrino radiation transport code, and a nuclear kinetics code. The increased complexity and realism of the simulations were made possible by recent upgrades to Jaguar, which now has a peak speed of 1.64 quadrillion calculations per second, or 1.64 petaflops.
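
In codes of this kind the component solvers are typically coupled by operator splitting: each physics module advances a shared state in turn over the same time step. The sketch below illustrates only that coupling pattern; it is not Chimera's actual interface, and every name in it is hypothetical.

    # Schematic operator-split coupling of three physics modules, in the
    # spirit of a hydrodynamics + neutrino transport + nuclear kinetics
    # code. All names are hypothetical illustrations.

    class State:
        """Fluid, neutrino, and composition fields on the simulation grid."""
        def __init__(self, density, velocity, energy, neutrinos, composition):
            self.density = density
            self.velocity = velocity
            self.energy = energy
            self.neutrinos = neutrinos        # neutrino distribution function
            self.composition = composition    # nuclear species abundances

    def advance_hydro(state, dt):
        """Hydrodynamics: update density, velocity, and energy fields."""
        ...  # e.g., a finite-volume update of the Euler equations

    def advance_neutrino_transport(state, dt):
        """Radiation transport: evolve the neutrino field and deposit
        energy and momentum in the fluid (the heating thought to help
        drive the explosion)."""
        ...

    def advance_nuclear_kinetics(state, dt):
        """Nuclear kinetics: integrate the reaction network, updating
        composition and nuclear energy release."""
        ...

    def step(state, dt):
        # First-order splitting: apply each operator sequentially over dt.
        advance_hydro(state, dt)
        advance_neutrino_transport(state, dt)
        advance_nuclear_kinetics(state, dt)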

 
Lattice QCD Research Significantly Accelerated by ALCF's Blue Gene/P

Since the installation of the 40-rack Blue Gene/P system at the Argonne Leadership Computing Facility (ALCF) in 2008, high energy and nuclear physicists have used over 300 million core-hours for the study of lattice quantum chromodynamics (QCD). They are carrying out simulations aimed at deepening scientists' understanding of the interactions of quarks and gluons, the fundamental constituents of the bulk of the observed matter in the universe. Their work provides critical results needed to interpret major experiments in high energy and nuclear physics.

Two formulations of lattice quarks are being used, each of which has its own advantages. Together, they provide important crosschecks on some critical calculations. Roughly 40 percent of the time has been spent on simulations using the less expensive improved staggered quarks. These configurations include both the smallest lattice spacing and the lightest up and down quarks ever used in simulations with staggered quarks. They are being used to calculate the decay constants of pi and K mesons, the weak transition coupling (CKM matrix element) between the up and strange quarks, and the masses of the lightest strongly interacting particles. This work will greatly improve the accuracy of these and a wide range of other quantities of importance in high energy and nuclear physics.
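
The connection between the decay constants and the CKM matrix element is the standard relation for leptonic decay widths (written here for the muonic channel, with \delta_{\mathrm{EM}} a small electromagnetic correction):

    \frac{\Gamma(K \to \mu\bar{\nu})}{\Gamma(\pi \to \mu\bar{\nu})}
      = \frac{|V_{us}|^2}{|V_{ud}|^2}\,\frac{f_K^2}{f_\pi^2}\,
        \frac{m_K \left(1 - m_\mu^2/m_K^2\right)^2}{m_\pi \left(1 - m_\mu^2/m_\pi^2\right)^2}
        \left(1 + \delta_{\mathrm{EM}}\right)

A lattice value of f_K/f_pi, combined with the measured widths and |V_ud|, thus determines |V_us|.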

The balance of time has been spent on simulations with the more challenging domain wall quarks. A new set of configurations being generated largely at the ALCF will allow more accurate continuum extrapolations of important quantities, such as QCD low-energy constants, weak matrix elements, and nucleon form factors, by providing measurements at a second lattice spacing, which was not available before. The configurations have been used to provide preliminary results on neutral kaon mixing, which is required for an important test of CP (charge and parity) symmetry violation predicted by the Standard Model. Measurements at the smaller lattice spacing will make it possible to decrease the systematic error by a factor of two and to enable accurate calculations at higher momentum transfer.
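
A second lattice spacing is what makes the continuum limit accessible: with the leading discretization error scaling as a^2, two spacings pin down the extrapolation to a = 0. A minimal sketch of such a fit, with made-up numbers purely for illustration:

    # Continuum extrapolation sketch. For domain wall quarks the leading
    # lattice artifact is O(a^2), so fit Q(a) = Q0 + c*a^2 and read off
    # the continuum value Q0. The data points below are invented.
    import numpy as np

    a = np.array([0.114, 0.086])   # lattice spacings in fm (two ensembles)
    Q = np.array([0.592, 0.571])   # measured quantity on each ensemble

    c, Q0 = np.polyfit(a**2, Q, 1) # two points fix the line exactly
    print(f"continuum value Q0 = {Q0:.4f} (a^2 slope c = {c:.3f})")

With a single spacing, c and Q0 cannot be separated, which is why the systematic error shrinks once the second, finer ensemble is included.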

Contact: James Osborn, osborn@alcf.anl.gov
 
INCITE Researchers Track Chlorofluorocarbons in a Global Eddying Ocean Model

Using Jaguar, the Cray XT system at the Oak Ridge Leadership Computing Facility (OLCF), Synte Peacock and Frank Bryan of the National Center for Atmospheric Research (NCAR) and Mathew Maltrud of Los Alamos National Laboratory (LANL) have carried out the first global eddying ocean simulation to span 100 model years. The model carried not only chlorofluorocarbons (CFCs) but also a host of other tracers that yielded valuable information about ocean ventilation pathways and timescales. To date, the team has been able to refine and successfully reassess earlier estimates linking changes in pollutant concentrations to climate change. The NCAR/LANL model is one of the most realistic global eddying models ever run, Maltrud said, and the only one to simulate such a large set of tracer distributions.

Researchers today increasingly recognize the important, but largely unknown, influence of oceanic activity on climate change. They want to know how the ocean is coping with vast deposits of chemical pollutants: how it moves them about, stores them over long periods of time, and ultimately exchanges them at the surface with the air. Using the powerful computers now available, they are building simulations to assess with greater precision the long-term effect of this oceanic housekeeping on global climate. Because of the limits of computational power, most previous studies of CFC distributions using ocean models have been done at fairly coarse resolutions (grid spacing greater than 100 kilometers), for which some important transport processes are either poorly resolved or poorly estimated. To begin to resolve features such as narrow currents and mesoscale eddies (circular, loop-like features with diameters of less than 200 kilometers), researchers need a model with a finer grid resolution, on the order of kilometers to tens of kilometers.
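
The tracer dynamics underlying such simulations follow a standard advection-diffusion balance,

    \frac{\partial C}{\partial t} + \mathbf{u} \cdot \nabla C
      = \nabla \cdot \left( K \nabla C \right) + S,

where C is a tracer concentration such as a CFC, u the resolved ocean velocity, K an eddy diffusivity, and S the surface source or sink. In a coarse model, most eddy transport must be folded into K as a parameterization; at eddying resolution it is carried explicitly by the resolved velocity field, which is what the finer grid buys.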

 
LANL Mimetic Finite Difference Method for Stokes Flow on Polygonal Meshes

Stokes flow is fluid flow in which advective inertial forces are negligibly small compared to viscous forces. This is the typical situation at the microscale or when the fluid velocity is very small. Stokes flow is a good and important approximation for a number of physical problems, such as sedimentation, modeling of bio-suspensions, construction of efficient fibrous filters, and development of energy-efficient micro-fluidic devices (e.g., mixers). Efficient numerical solution of Stokes flow requires unstructured meshes adapted to geometry and solution, as well as accurate and stable discretization methods capable of treating such meshes. A research team led by Konstantin Lipnikov of LANL developed a new mimetic finite difference (MFD) method that remains accurate and stable on general polygonal meshes that may include non-convex and degenerate elements.
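
In this regime the nonlinear advection term of the Navier-Stokes equations drops out, leaving the linear Stokes system that the discretization targets (a standard formulation, with u the velocity, p the pressure, \mu the viscosity, and f the body force):

    -\mu\,\Delta\mathbf{u} + \nabla p = \mathbf{f}, \qquad
    \nabla \cdot \mathbf{u} = 0

The approximation holds when the Reynolds number Re = \rho U L / \mu is much less than one, with U and L characteristic velocity and length scales, which is why microscale and slow flows qualify.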

The new MFD method has a number of similarities with low-order finite element (FE) methods. Both approaches try to preserve fundamental properties of the physical and mathematical models. Various approaches to extending the FE method to non-simplicial elements have been developed over the last decade; construction of basis functions for such elements is a challenging task and may require extensive analysis of geometry. In contrast to the FE method, the MFD method uses only a boundary representation of the discrete unknowns to build its stiffness and mass matrices. Since no extension inside the mesh element is required, practical implementation of the MFD method is simple for general polygonal meshes. Co-authors of this result are K. Lipnikov (LANL), V. Gyrya (Penn State University), G. Manzini (IMATI, Italy), and L. Beirao da Veiga (UNIMI, Italy). This work was done as part of the ASCR Applied Mathematics Research Project "Mimetic Finite Difference Methods for Partial Differential Equations."

Contact: Konstantin Lipnikov, lipnikov@lanl.gov or
Mikhail Shashkov, shashkov@lanl.gov
 
LBNL, ANL and ORNL Staff to Present at IPDPS 2009 Symposium

Staff from Lawrence Berkeley National Laboratory's (LBNL's) Computational Research and NERSC divisions will give five presentations during the IEEE International Parallel & Distributed Processing Symposium (IPDPS), to be held May 25-29 in Rome, Italy. Joining them will be staff from Argonne and Oak Ridge, each contributing to two presentations. IPDPS is an international forum for engineers and scientists from around the world to present their latest research findings in all aspects of parallel computation. In addition to technical sessions of submitted paper presentations, the meeting offers workshops, tutorials, and commercial presentations and exhibits.

"This is one of the most competitive IEEE conferences, and the fact that we have so many papers speaks for the quality of our computer science program," said Horst Simon, applications chair for the conference and Associate Lab Director for Computing Sciences at LBNL.

LBNL contributors are:
  • Lenny Oliker of the Future Technologies Group is one of three invited keynote speakers. Oliker will give his talk "Green Flash: Designing an energy efficient climate supercomputer" on Thursday, May 28.
  • Ekow Otoo, Doron Rotem and Shih-Chiang Tsao of the Scientific Data Management Group will present their paper on "Analysis of Trade-Off between Power Saving and Response Time in Disk Storage Systems" as part of the Fifth Workshop on High-Performance, Power-Aware Computing on Monday, May 25.
  • Kamesh Madduri of LBNL's Scientific Data Management Group, along with David A. Bader of the Georgia Institute of Technology, will present their paper on "Compact Graph Representations and Parallel Connectivity Algorithms for Massive Dynamic Network Analysis" as part of the Graph and String Applications session on Tuesday, May 26.
  • Brian Van Straalen, Terry Ligocki, and Noel Keen of the Applied Numerical Algorithms Group, John Shalf of NERSC, and Woo-Sun Yang of Cray Inc. will present "Scaling Challenges for Massively Parallel AMR Applications" during the Scientific Applications Session on Thursday, May 28.
  • Rajesh Nishtala, Paul Hargrove, and Dan Bonachea of the Future Technologies Group and NERSC Director Katherine Yelick will present "Scaling Communication Intensive Applications on BlueGene/P Using One-Sided Communication and Overlap" during the Communications Systems Session on Thursday, May 28.
Argonne National Laboratory (ANL) contributors are:
  • Boyana Norris of ANL, with Albert Hartono (lead author) and Ponnuswamy Sadayappan, both of Ohio State University, will present "Annotation-Based Empirical Performance Tuning Using Orio" as part of the System Software and Applications Session on Tuesday, May 26.
  • Philip Carns, Sam Lang and Robert Ross, all of Argonne, along with Murali Vilayannur of VMware Inc. and Julian Kunkel and Thomas Ludwig of the University of Heidelberg, will present "Small File Access in Parallel File Systems" as part of the I/O and File Systems Session on Tuesday, May 26.
Oak Ridge National Laboratory (ORNL) contributors are:
  • Scott Klasky of ORNL, along with Jay Lofstead (lead author), Fang Zheng and Karsten Schwan, all of Georgia Tech, will present "Adaptable, Metadata Rich IO Methods for Portable High Performance IO."
  • Weikuan Yu and Jeffrey Vetter, both of Oak Ridge, and Oleg Drokin of Sun Microsystems Inc. will present "Design, Implementation, and Evaluation of Transparent pNFS on Lustre."
    Both ORNL talks will be part of the I/O and File Systems Session on Tuesday, May 26.
For additional information about IPDPS, go to: http://www.ipdps.org
 
More Accurate Predictions of Charge States in Solid State and Extended Systems

A computational technique developed by researchers from the Pacific Northwest National Laboratory (PNNL), the University of California, San Diego, and the University of Iceland is providing more accurate predictions of charge states within solid state and extended systems important to DOE research in solar materials, hydrogen fuel cells, and in-situ remediation. The research team developed a parallel algorithm for implementing a hybrid-DFT approach in the PNNL-developed NWChem program. DFT (density functional theory) is a quantum mechanical theory used to investigate the electronic structure of atoms, molecules, and condensed phases. The technique is designed for use in plane-wave DFT programs, making it applicable to both confined and extended systems, as well as to ab initio molecular dynamics simulations.
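
Hybrid DFT replaces a fraction of the approximate exchange energy with exact (Hartree-Fock) exchange, which is what improves charge-state and band-gap predictions, and also what makes plane-wave implementations computationally demanding. The article does not say which hybrid functional was used; the widely used PBE0 form is shown here as a representative example:

    E_{xc}^{\mathrm{PBE0}} = \tfrac{1}{4}\,E_{x}^{\mathrm{HF}}
      + \tfrac{3}{4}\,E_{x}^{\mathrm{PBE}} + E_{c}^{\mathrm{PBE}}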

The overall performance of the hybrid-DFT calculations was found to be quite reasonable, even for small system sizes. For calculations describing an 80-atom supercell of hematite, the time per step was 21 seconds with 88 percent parallel efficiency on 1,024 processors, decreasing to 12 seconds per step and 76 percent efficiency on 2,048 processors. This method has been applied successfully to several systems for which conventional DFT methods do not work well, including more accurate predictions of band gaps in oxides and of the electronic structure of charge-trapped states in silica, TiO2 surfaces, and iron-containing micas.
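
Those timings are internally consistent: doubling the processor count cuts the time per step from 21 to 12 seconds, a speedup of 1.75 against an ideal 2.0. A quick arithmetic check:

    # Consistency check on the reported hybrid-DFT scaling figures.
    t_1024, t_2048 = 21.0, 12.0   # seconds per step, from the article
    speedup = t_1024 / t_2048     # 1.75x from doubling the processor count
    rel_eff = speedup / 2.0       # 0.875 of ideal
    print(f"speedup = {speedup:.2f}, relative efficiency = {rel_eff:.1%}")
    # Starting from 88% efficiency on 1,024 processors: 0.88 * 0.875 = 0.77,
    # matching the reported 76% on 2,048 processors to within rounding.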

Contact: Eric Bylaska, eric.bylaska@pnl.gov
 
David Bailey's Research Featured in Spektrum der Wissenschaft

The work of David H. Bailey, Chief Technologist of LBNL's Computational Research Division, was recently featured in Spektrum der Wissenschaft (http://www.spektrumverlag.de), the German equivalent of Scientific American. In the January 2009 issue, an article entitled "Der Computer als Formelentdecker" ("The Computer as Formula Discoverer") by Spektrum editor Christoph Pöppe summarizes recent developments in experimental mathematics. The abstract of the article reads (translated): "A computer program can discover, through targeted searching, what a numerically calculated value 'actually' is. But this does not put mathematicians out of work; on the contrary, the program turns up numerous relationships that still need to be proved." A translation of the article will appear soon on the "Papers on Experimental Mathematics" website.

 
New SciDAC-Supported Solver Runs Well on NASA's Pleiades Supercomputer

The SciDAC Science Application team "Simulations of Turbulent Flows with Strong Shocks and Density Variations" (PI Sanjiva Lele, Stanford University; see http://shocks.stanford.edu) has been developing a high-order accurate finite difference solver for compressible fluid and magnetohydrodynamic flows, called ADPDIS3D. The team recently added multi-material capabilities to ADPDIS3D and used the code to simulate a Richtmyer-Meshkov instability and a hypersonic reentry problem on the Pleiades supercomputer at NASA Ames Research Center (6,400 nodes, 51,200 CPUs). The computations performed with ADPDIS3D on Pleiades achieved more than 80 percent of the ideal speedup when the number of CPUs was increased from 64 to 15,625.
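
The article does not detail ADPDIS3D's numerics beyond "high-order accurate finite differences," but the flavor can be shown with a generic sixth-order central difference for a first derivative; the payoff of the wider stencil is error that decays as O(h^6) rather than O(h^2):

    # Generic sixth-order central first derivative on a periodic grid,
    # for illustration only (not ADPDIS3D's actual scheme).
    import numpy as np

    def ddx6(u, h):
        # Stencil coefficients for offsets -3..+3.
        c = [-1/60, 3/20, -3/4, 0.0, 3/4, -3/20, 1/60]
        return sum(ck * np.roll(u, 3 - k) for k, ck in enumerate(c)) / h

    n = 64
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    h = x[1] - x[0]
    err = np.max(np.abs(ddx6(np.sin(x), h) - np.cos(x)))
    print(f"max error on {n} points: {err:.1e}")  # ~1e-8, vs ~1e-3 at 2nd order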

 

PEOPLE:

Sandia Researcher Bruce Hendrickson Elected to SIAM Council

Bruce Hendrickson, Senior Manager for Computer Science and Mathematics at Sandia National Laboratories, was elected to the Council of the Society for Industrial and Applied Mathematics. SIAM is the leading international professional society for applied mathematics, and the SIAM Council determines policy and priorities for the Society.

 
LLNL's Carol Woodward Elected as a SIAM CSE Officer

Carol Woodward, a computational scientist in the Center for Applied Scientific Computing (CASC) at Lawrence Livermore National Laboratory (LLNL), was elected in early January to serve as Secretary of the Society for Industrial and Applied Mathematics (SIAM) Activity Group on Computational Science and Engineering. SIAM is one of the premier professional societies, whose mission is "to ensure the strongest interactions between mathematics and other scientific and technological communities through membership activities, publication of journals and books, and conferences." Woodward leads the Nonlinear Solvers and Differential Equations (NSDE) Project within CASC. Her research interests include numerical methods for nonlinear partial differential equations, nonlinear and linear solvers, verification of scientific codes, and parallel computing.

 
Sandia's Pavel Bochev Promoted

Sandia National Laboratories researcher Pavel Bochev was recently promoted to Distinguished Member of the Technical Staff (DMTS). Bochev is a computational mathematician whose research interests include numerical analysis, applied mathematics, and computational science. His ASCR-funded research on compatible discretizations, and its impact on Sandia's mission, played a large role in the promotion.

 
ESnet's Bill Johnston Gives Network Overview to Government IT Experts

Bill Johnston, former department head of the Energy Sciences Network (ESnet), spoke at a live case study hosted by Juniper Networks in Reston, Virginia. In this invitation-only technical seminar for government and commercial Juniper customers, Johnston described the architecture and implementation of ESnet4, the organization's next-generation network, and the hardware used to build it. The new network is capable of accommodating the massive data flows required for scientific collaborations.

 

FACILITIES/INFRASTRUCTURE:

ORNL's Petaflop Jaguar Completes Testing, Early Applications Start

The petascale upgrade to Oak Ridge National Laboratory's (ORNL's) Jaguar supercomputer, the flagship system of the DOE's Oak Ridge Leadership Computing Facility (OLCF), has completed acceptance testing, verifying the performance, functionality and stability of the most powerful system ever built for open scientific research. Conducted by staff from ORNL and the system's manufacturer, Cray Inc., the testing process concluded with a weeklong marathon in which the system demonstrated its stability by running a suite of applications from the climate, fusion, materials research, combustion science, chemistry, and astrophysics communities.

Jaguar's latest upgrade brought the system to a peak performance of 1.64 thousand trillion calculations a second (1.64 petaflops), making it the first petaflop system dedicated to open scientific research and the second such system ever built. Jaguar uses more than 45,000 quad-core AMD Opteron processors. In addition, it has 362 terabytes of memory (more than three times that of any other system in existence), a 10-petabyte file system, 578 terabytes per second of memory bandwidth, and an unprecedented input/output bandwidth of 284 gigabytes per second.
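
The peak figure is consistent with a back-of-the-envelope estimate: 45,000 quad-core processors is 180,000 cores, and an Opteron core of that generation completes four floating-point operations per clock cycle. The clock rate below is an assumption (the article does not state it), so this is a rough check rather than the official accounting:

    # Rough check of Jaguar's quoted 1.64 PF peak.
    # Assumed, not stated in the article: ~2.3 GHz clock, 4 flops/cycle/core.
    processors = 45_000              # quad-core Opterons, from the article
    cores = processors * 4
    flops_per_core = 2.3e9 * 4       # 4 floating-point ops per cycle at 2.3 GHz
    peak_pf = cores * flops_per_core / 1e15
    print(f"{cores:,} cores -> ~{peak_pf:.2f} petaflops")  # ~1.66 PF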

To give the most capable codes a head start and to test the mettle of the new system, the OLCF has granted early access to a number of projects that can each utilize a majority of the machine and take it, and science, to their respective limits.

"The current plan is for the system to be used during the next several months for specific high-impact projects of national importance," Director of Science Doug Kothe said in a recent interview featured on HPCwire. "We have three principal goals during the system's early phase: deliver important, high-impact science results and advancements; harden the system for production; and embrace a broad user community capable of and prepared for using the system". This priority "Petascale Early Science" period will run approximately six months and consist initially of 20 projects, said Kothe, adding that the first phase will begin in mid-January and run through mid-March. Phase one features a broad range of science, including two climate projects and one project each in the domains of chemistry, biology, combustion and materials science. Fusion, nuclear energy, astrophysics, and geosciences will also be explored in the following two phases, planned to end in mid-July.

OUTREACH:

Final ASCR/ASC Risk Management Workshop Report Released

The final report from the Risk Management Techniques and Practice Workshop sponsored by the SC/ASCR and NNSA/ASC programs was released in December. The purpose of the workshop was to assess current and emerging techniques, practices, and lessons learned for effectively identifying, understanding, managing, and mitigating the risks associated with acquiring leading-edge computing systems at high-performance computing centers (HPCCs). The workshop was hosted at LLNL by Terri Quinn and Mary Zosel and drew participants from 15 high-performance computing (HPC) organizations, four HPC vendor partners, and three government agencies.

The primary workshop findings include the following:
  • Standard risk management techniques and tools are, in the aggregate, applicable to projects at HPCCs and are commonly employed by the HPC community.
  • HPC projects have characteristics that necessitate a tailoring of the standard risk management practices.
  • All HPCC acquisition projects can benefit from employing risk management, but the specific choice of risk management processes and tools is less important to a project's success.
  • The special relationship between the HPCCs and HPC vendors must be reflected in the risk management strategy.
  • Best-practice findings include developing a prioritized risk register with special attention to the top risks (see the sketch after this list), establishing a practice of regular meetings and status updates with the platform partner, supporting regular and open reviews that engage the interests and expertise of a wide range of staff and stakeholders, and documenting and sharing the acquisition/build/deployment experience.
  • Top risk categories include system scaling issues, request for proposal/contract and acceptance testing, and vendor technical or business problems.
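
A prioritized risk register is straightforward to picture: each risk carries a likelihood and an impact, and ranking by their product keeps the top risks in view at every status meeting. The sketch below is illustrative only; the fields, scoring scale, and example entries (drawn from the risk categories named above) are not taken from the workshop report.

    # Minimal prioritized risk register, illustrative only.
    risks = [
        {"risk": "System fails to scale to full size", "likelihood": 0.4, "impact": 5},
        {"risk": "Acceptance testing slips the schedule", "likelihood": 0.3, "impact": 4},
        {"risk": "Vendor business or technical problems", "likelihood": 0.2, "impact": 5},
    ]

    # Rank by exposure = likelihood x impact; review the top entries at each
    # regular status meeting with the platform partner.
    for r in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
        print(f'{r["likelihood"] * r["impact"]:4.1f}  {r["risk"]}')
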
Read the report: https://rmtap.llnl.gov/report.php (LLNL-TR-409240)
Workshop web site: https://rmtap.llnl.gov/
 
LBNL and NERSC to Collaborate with Korean Institute

Berkeley Lab, including NERSC, has signed a Memorandum of Understanding to collaborate with the Korea Institute of Science and Technology Information (KISTI), including its Supercomputing Center, e-Science Division and Computing and Networking Resources Division. Specific intentions include:

  • Pursue collaboration on the optimization of operations and performance of high performance computing and networking (HPCN) facilities, including KREONET/GLORIAD and large scale data storage.
  • Pursue collaboration on the development of the HPCN, e-Science and grid computing software (both infrastructure and applications related), the development of computational science and engineering applications, and the sharing of expertise in the optimization of applications performance on HPCN systems.
  • Provide mutual access to facilities for the purposes of evaluating systems and applications performance.
  • Encourage collaboration and cooperation of projects involving scientists, engineers and personnel from the user communities associated with each organization.
  • Offer an employee exchange opportunity with the aim of sharing and furthering the scientific and technical know-how of both institutions.
 
Extreme Scale Computing Workshops Focus on Climate, HEP

Each of the first two workshops in the Extreme Scale Computing Workshop Series (http://extremecomputing.labworks.org), a series of eight ASCR-funded workshops led by Pacific Northwest National Laboratory, attracted nearly 100 invited participants, with a good balance between laboratory and non-laboratory attendees and strong international participation.

The Climate Change Science workshop, chaired by Warren Washington (NCAR), focused on model development and integrated assessment; algorithms and computational environment; decadal predictability and prediction; and extreme scale data management, analysis, visualization, and productivity. A letter report has been sent to DOE, and a final workshop report is being drafted.

The High Energy Physics (HEP) workshop, chaired by Roger Blandford (SLAC), focused on astrophysics data; cosmology and astrophysics simulations; experimental particle physics; accelerator simulation; and high energy theoretical physics. A letter report is currently being prepared for DOE.

Contact: Moe Khaleel, moe.khaleel@pnl.gov or
T.P. Straatsma, tp.straatsma@pnl.gov
 
ALCF "Getting Started" Workshop Set for February

The Argonne Leadership Computing Facility (ALCF) is hosting an INCITE Getting Started Workshop on February 10-11 at Argonne National Laboratory. The workshop will provide researchers conducting both new and renewed INCITE projects with information on ALCF services and resources, technical details on the IBM Blue Gene/P architecture, and hands-on assistance in porting and tuning their applications on the Blue Gene/P computer. In addition, a special session will be held on Eureka, which provides visualization and data analytics to transform data from the Blue Gene/P into useful knowledge. Eureka is currently the world's largest installation of NVIDIA Quadro Plex S4 external graphics processing units (GPUs), offering more than 111 teraflops of computing power and more than 3.2 terabytes of RAM (5 percent of Intrepid's RAM). The workshop is an excellent opportunity to maximize 2009 INCITE awards and become familiar with ALCF staff and resources.

Contact: Chel Lancaster, lancastr@alcf.anl.gov
 
ORNL Announces Upcoming XT Workshop

The Oak Ridge Leadership Computing Facility (OLCF) and the National Institute for Computational Sciences (NICS), both located at Oak Ridge National Laboratory (ORNL), will sponsor a four-day workshop (April 13-16) covering the important issues in obtaining increased performance from Cray XT systems. Among the topics to be covered are XT5 architecture, XT5 NUMA issues, and effective programming for the XT5. The workshop will feature lectures from OLCF and Cray staff as well as hands-on sessions. The registration site, with complete agenda, can be found at:

 
LBNL Hosts Researchers from Karlsruhe Institute of Technology

Leading researchers from Karlsruhe Institute of Technology (KIT) in Germany visited Berkeley Lab Computing Sciences on Wednesday, January 28, as part of a week-long visit to California to strengthen existing partnerships and establish new research collaborations. The Berkeley Lab visit included an exchange of information on research at both institutions, along with discussions of challenges of large-scale data management and analysis, and energy efficiency problems in future HPC environments. The KIT researchers' itinerary also included stops at UC Berkeley, CITRIS (the Center for Information Technology Research in the Interest of Society), Hewlett Packard, IBM, Google, and Stanford University.

 
Mimetic Discretization Methods Gaining Popularity in Europe

Konstantin Lipnikov of Los Alamos National Laboratory gave invited presentations at mathematical seminars at the Institute of Applied Mathematics and Information Technology (IMATI) in Pavia and at the University of Milan, both in Italy. He talked about discretization methods that preserve, or mimic, important mathematical and physical properties of the underlying PDEs. At LANL, such methods are developed by Mikhail Shashkov's group as part of the ASCR Applied Mathematics Research Project "Mimetic Finite Difference Methods for Partial Differential Equations."

The team of applied mathematicians at IMATI is one of the strongest groups in Europe in the area of numerical analysis of PDEs. The head of this group and Director of IMATI, Prof. Franco Brezzi, was recently recognized as one of the most cited applied mathematicians in the world. During his one-week invited visit to IMATI, Lipnikov discussed how to combine LANL's and IMATI's efforts in the analysis and development of new mimetic discretization methods for a broader range of applications.

Contact: Konstantin Lipnikov, lipnikov@lanl.gov or
Mikhail Shashkov, shashkov@lanl.gov

 

 
