An Inter-Comparison of Icosahedral Climate Models on the G8 Call: ICOMEX Project

Presented by: 
Ryuji Yoshida [RIKEN/AICS]
Date: 
Friday 28th September 2012 - 13:55 to 14:20
Venue: 
INI Seminar Room 1
Session Title: 
Novel Optimisation Techniques
Abstract: 
The ICOsahedral-grid Models for EXascale Earth system simulations (ICOMEX) consortium develops climate models toward exascale computing. It started in October 2011 as one of the G8 call projects. The participating model teams are NICAM (Japan), ICON (Germany), MPAS (UK and US), and DYNAMICO (France). On the road to exascale computing we face many roadblocks, such as file I/O speed, a lower bytes/flop ratio, and inter-node communication. To examine these problems, six working groups were established. The Japan team works on model inter-comparison to create synergy among the working groups. Although all participating models use the icosahedral grid, their discretization methods differ. For example, NICAM uses a hexagonal control volume, while ICON uses a triangular one. By intercomparing both the computational and physical performance of the models, we will identify which aspects of model configuration are advantageous toward exascale computing. Two types of experiments have been performed so far using NICAM and ICON: the baroclinic wave test (Jablonowski and Williamson, 2006) and the statistical climatology test (Held and Suarez, 1994). Horizontal resolutions range from 240 km (glevel-5) to 14 km (glevel-9), and higher-resolution runs will be tested in the future. For the Held and Suarez test case, NICAM and ICON simulated climatologies similar to those shown in the original paper. The baroclinic wave test is also compared between the two models. To investigate computational aspects, we examined the strong scaling of parallel computing. Tests were performed on Westmere and Bulldozer machines using 5 to 40 MPI processes. We found that both models scale well: the measured scaling efficiency is 0.8-0.9. We plan to perform the intercomparison experiments on the K computer with a larger number of processes, O(10^5), to examine detailed profiles on very massively parallel cores.
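The strong-scaling efficiency quoted in the abstract can be computed as measured speedup divided by ideal speedup at a fixed problem size. The sketch below illustrates this metric; the wall-clock times are hypothetical placeholders chosen to land in the reported 0.8-0.9 range, not measured ICOMEX results.

```python
def strong_scaling_efficiency(procs_base, time_base, procs, time):
    """Efficiency = measured speedup / ideal speedup.

    Strong scaling: the problem size is fixed while the process
    count grows, so ideal speedup equals the ratio of process counts.
    """
    speedup = time_base / time      # measured speedup over the baseline run
    ideal = procs / procs_base      # ideal (linear) speedup
    return speedup / ideal

# Example over the 5 -> 40 MPI-process range used in the abstract's tests.
t5 = 100.0   # hypothetical wall-clock time with 5 processes (seconds)
t40 = 14.7   # hypothetical wall-clock time with 40 processes (seconds)
eff = strong_scaling_efficiency(5, t5, 40, t40)
print(f"scaling efficiency: {eff:.2f}")  # ~0.85 for these placeholder times
```

An efficiency of 1.0 would mean perfectly linear scaling; values of 0.8-0.9, as reported for NICAM and ICON, indicate that communication and other overheads consume 10-20% of the ideal gain.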