Prospects of High Performance Computing in Turbulence Research
Friday 26th September 2008
13:30 to 14:15
J Jiménez (Universidad Politécnica de Madrid)
What are we going to need to keep computing turbulence, and what can we get in return?
Direct numerical simulation has been one of the primary tools of turbulence research over the last two decades. Since the first simulations of low-Reynolds-number turbulent flows appeared in the early 1980s, the field has moved to higher Reynolds numbers and to more complex flows, finally overlapping the lower range of laboratory experiments. This has provided a parametric continuity that can be used to calibrate experiments and simulations against each other. It has turned out that, whenever both are available, simulations are usually the more consistent, mainly because they have essentially no instrumental constraints, and because the flow parameters can be controlled much more tightly than in the laboratory (although the two are not necessarily equivalent). Perhaps more important is that simulations afford a degree of control over the definition of the physical system that is largely absent in the laboratory, allowing flows to be studied "piecemeal" and broken into individual "parts". We are now at the point at which these techniques can be applied to flows with nontrivial inertial cascades, thus providing insight into the "core" of the turbulence machinery. This has been made possible by the continued increase in computer power, which has roughly doubled every year, providing every decade a factor of 1000 in computing speed and a factor of ten in Reynolds number. Software evolution has also been important, and will become increasingly so. Numerical schemes have changed little, typically relying on high-resolution methods that require smaller grids, but hardware models have moved from vector, to cache-based, to highly parallel, each of which has required major reprogramming. Lately, most of the speed-up has come from higher processor counts and finer granularities, which interfere with the wide stencils of high-resolution methods. The trend towards complex flow geometries also pushes numerical schemes towards lower orders.
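The link between a factor of 1000 in speed and a factor of ten in Reynolds number follows from the standard cost scaling of DNS. A minimal back-of-the-envelope sketch (the Re³ scaling used here is the textbook estimate for homogeneous turbulence, not stated explicitly in the abstract):

```python
# For homogeneous turbulence, resolving the Kolmogorov scale needs
# N ~ Re^(3/4) grid points per direction; with O(N) time steps, the
# total work grows as N^4 ~ Re^3 (textbook estimate, assumed here).
def hardware_factor(re_factor):
    """Speed-up needed to multiply the Reynolds number by re_factor."""
    return re_factor ** 3

# A decade of doubling every year gives 2**10 ~ 1000x in speed,
# hence the abstract's factor of ten in Reynolds number:
print(2 ** 10, hardware_factor(10))  # 1024 1000
```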
This may bring at least a temporary slow-down in the rate of increase of Reynolds numbers. Moving from spectral to second-order methods typically requires a factor of three in resolution, or a factor of 80 in operation count, which amounts to about six years of hardware growth. Another limiting factor is data storage and sharing. Typical simulations today generate terabytes of data, which have to be archived, postprocessed, and shared with the community. This will soon increase to petabytes, especially if lower-order methods, with their finer grids, become the norm. There are at present few archival high-availability methods for these data volumes, all of them expensive, and essentially no way to move the data among groups. Problems of this type have been common during the last two decades, and they have been solved. They will no doubt also be solved now, but they emphasise that simulations, although by now an indispensable tool of turbulence research, will remain far from routine for some time.
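The factor of 80 and the six years can be checked directly, assuming (as is standard) that the resolution factor applies to the three spatial directions plus the CFL-limited time step:

```python
import math

# Tripling the resolution in each of three spatial directions gives
# 3**3 = 27x more points; the CFL condition shrinks the time step by
# another factor of 3, for 3**4 = 81 ~ 80x the operation count.
ops_factor = 3 ** 4
print(ops_factor)  # 81

# At the abstract's doubling of computer power every year:
years = math.log2(ops_factor)
print(round(years, 1))  # 6.3, i.e. about six years of hardware growth
```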
14:15 to 15:00
T Zacharia (Oak Ridge National Laboratory)
Impacts and opportunities for leadership computing
Energy issues are central to the most important strategic challenges facing the United States and the world. The energy problem can be broadly defined as providing enough energy to support higher standards of living for a growing fraction of the world’s increasing population, without creating intractable conflict over resources or causing irreparable harm to our environment. It is increasingly clear that even large-scale deployment of the best currently available energy technologies will not be adequate to tackle this problem successfully. Substantial advances in the state of the art in energy generation, distribution, and end use are needed. It is also clear that a significant and sustained effort in basic and applied research and development (R&D) will be required to deliver these advances and ensure a desirable energy future. It is in this context that high-performance computing takes on a significance co-equal with theory and experiment. As computing enters the petascale, a capability that until recently was beyond imagination is now poised to address these critical problems. Oak Ridge National Laboratory is home to two supercomputer centers funded by the U.S. Department of Energy and the National Science Foundation. The world-leading petascale computers now being deployed will make it possible to solve R&D problems of importance to a secure energy future.
15:00 to 15:20 | Tea
15:20 to 16:05
M Yokokawa (RIKEN)
A role of spectral turbulence simulations in developing HPC systems
Since the advent of supercomputers, numerical simulations of complicated phenomena have been made possible by their powerful computational capability, and they have made outstanding contributions to revealing the unknown across a wide variety of science and engineering fields, especially turbulence. The spectral method performs a large number of floating-point operations in its kernel loops and therefore requires high memory bandwidth between CPU and memory, as well as high CPU performance. Moreover, since parallel implementations of the method must transpose three-dimensional data arrays among the parallel elements, high bisection bandwidth of the inter-element network is also required. The method is therefore an essential and important one to consider when designing HPC systems. The recent trend towards HPC systems with more than ten thousand parallel computational elements, each of low peak performance and low power consumption, however, brings difficulties such as fine-grained parallelisation and low computational efficiency when such systems are used for turbulence simulations. Longer simulations will also be requested once systems reach PFLOPS-class peak performance. The trade-off between high performance and low electric power is an essential issue in designing HPC systems. We will discuss the possibility of higher-resolution turbulence simulations with reference to recent trends in HPC systems and a development project for an HPC system in Japan.
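The data transposition mentioned above arises because FFT libraries transform axes that are local in memory, so the axis distributed across parallel elements must first be rotated into a local position. A minimal serial sketch with numpy, standing in for the MPI all-to-all of a real parallel spectral code:

```python
import numpy as np

# A 3-D FFT done axis by axis: two axes are transformed in place,
# then the array is transposed so the remaining (in a parallel code,
# distributed) axis becomes local and can be transformed too.
a = np.random.rand(8, 8, 8)

b = np.fft.fft(np.fft.fft(a, axis=2), axis=1)  # two locally stored axes
b = b.transpose(2, 1, 0)                       # the global transposition
b = np.fft.fft(b, axis=2)                      # former axis 0, now local
b = b.transpose(2, 1, 0)                       # restore the layout

# The staged result matches a direct 3-D transform:
assert np.allclose(b, np.fft.fftn(a))
```

In a parallel implementation, the transpose step moves essentially the whole array across the network, which is why the abstract singles out bisection bandwidth.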
16:05 to 16:50
PK Yeung (Georgia Tech)
Extreme scaling in turbulence simulations: challenges and opportunities for the research community
Current trends in the development of petascale computer hardware point to the importance of developing algorithms, in many fields of science, capable of scaling to extremely large numbers of parallel processing elements. We present performance benchmarking data for a turbulence simulation code based on a domain decomposition technique that allows the use of up to $N^2$ cores on an $N^3$ periodic domain, the largest test to date being at $N=8192$ on 32768 cores. While significant technical challenges remain on the path towards true petascale performance in more complex geometries, we will discuss a range of opportunities for sharing both data and codes with the research community, in order to maximize the scientific benefits of continuing rapid advances in computing power.
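The $N^2$-core limit quoted above is the defining property of a two-dimensional ("pencil") domain decomposition, as against the $N$-process ceiling of a one-dimensional ("slab") one. A small sketch of the bookkeeping (the function is illustrative, not taken from the code described):

```python
# Maximum process counts for an N^3 periodic grid: a slab
# decomposition holds whole planes (at most N processes), while a
# pencil decomposition holds 1-D lines (at most N**2 processes).
def max_processes(N, decomposition):
    return {"slab": N, "pencil": N * N}[decomposition]

# The largest test cited, N = 8192 on 32768 cores, sits comfortably
# within the pencil limit, far beyond what slabs would allow:
assert 32768 > max_processes(8192, "slab")
assert 32768 <= max_processes(8192, "pencil")
print(max_processes(8192, "pencil"))  # 67108864
```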
16:50 to 17:30 | Discussion | INI 1