
Workshop Programme

for period 22 - 25 October 2012

Weather and climate prediction on next generation supercomputers

Timetable

Monday 22 October
12:35-13:30 Welcome and Opening
13:30-14:15 Nigel Wood (Met Office)
  The Dynamical Core of the Met Office Unified Model: the challenge of future supercomputer architectures
 

This decade is set to be an interesting one for operational weather and climate modelling. The accuracy of weather forecasts has reached unprecedented and probably unexpected levels: large-scale measures of accuracy continue to improve at the rate of 1 day every 10 years so that today's 3 day forecast is as accurate as the 1 day forecast was 20 years ago.

In order to maintain this rate of improvement, operational centres need to continue to increase the resolutions of their models. Increasingly this means running models at resolutions of the order of a kilometre. This leads to many challenges. One is how to handle processes that are only barely resolved at those scales. Another is how to present, and also verify, forecasts that are inherently uncertain due to the chaotic nature of the atmosphere.

A more practical issue though is simply how to run the models at these increased resolutions! To do so requires harnessing the power of some of the world's largest supercomputers which are entering a period of radical change in their architecture.

That challenge is made more difficult by the fact that the UK Met Office's model (the MetUM) is unified in that the same dynamical core (and increasingly also the same physics packages and settings) is used for all our operational weather and climate predictions. The model therefore has to perform well across a wide range of both spatial scales [O(10^0)-O(10^4) km] and temporal scales [O(10^0)-O(10^4)], as well as a wide range of platforms.

This talk will start by outlining the current status of the MetUM, then discuss planned developments (focussing on numerical aspects) before going on to highlight recent progress within GungHo! - the project that is redesigning the dynamical core of the model.

 
14:15-14:40 Mikhail Tolstykh (Russian Academy of Sciences)
  Development of the next generation SLAV global atmospheric model
 

SLAV is the global finite-difference semi-Lagrangian numerical weather prediction model used operationally at the Hydrometcentre of Russia. Its distinguishing features are a vorticity-divergence formulation (in the horizontal plane) on an unstaggered grid, and fourth-order finite differences. The version currently in development has a resolution of 0.18-0.22 degrees in latitude and 0.225 degrees in longitude, with 51 levels. The presentation covers two topics.

1. Further development of the existing hydrostatic version. This includes:
- implementation of mass-conserving semi-Lagrangian advection on the reduced lat-lon grid, a 3D extension of Tolstykh and Shashkin (JCP, 2012); some preliminary results will be shown;
- a recent increase in code scalability from 160 to more than 800 cores;
- work on further increases in scalability.

2. Plans for the development of a global nonhydrostatic model. We plan to test a parallel elliptic solver (code developed at INM) on different massively parallel platforms, with a matrix arising from the semi-implicit discretization of an MC2-type nonhydrostatic model formulation. Depending on the results, we will choose between the semi-implicit and horizontally explicit-vertically implicit (HE-VI) time integration schemes. A choice of HE-VI would imply radical changes in the next generation of our dynamical core; these changes will also be discussed in the presentation.
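
As background to the first topic, here is a minimal 1D sketch of a semi-Lagrangian advection step in Python: trajectories are traced back from each grid point and the field is interpolated at the departure points. This is an illustration only (linear interpolation, no mass conservation), not the mass-conserving 3D scheme referenced above.

    import numpy as np

    def semi_lagrangian_step(q, u, dt, dx):
        """One 1D semi-Lagrangian advection step on a periodic grid.

        Minimal illustration: linear interpolation at departure points,
        no mass fixer (unlike the operational scheme in the abstract).
        """
        n = q.size
        x = np.arange(n) * dx
        # Departure points: trace trajectories back one time step.
        x_dep = (x - u * dt) % (n * dx)
        # Linear interpolation of q at the departure points.
        i = np.floor(x_dep / dx).astype(int)
        w = x_dep / dx - i
        return (1.0 - w) * q[i % n] + w * q[(i + 1) % n]

    # Example: advect a Gaussian bump once around a periodic domain.
    n, dx, u = 128, 1.0 / 128, 0.3
    q = np.exp(-200 * (np.arange(n) * dx - 0.5) ** 2)
    dt = 2.0 * dx / u        # CFL > 1 is fine for semi-Lagrangian schemes
    for _ in range(int(1.0 / (u * dt))):
        q = semi_lagrangian_step(q, u, dt, dx)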

 
14:40-15:10 Afternoon Coffee
15:10-15:55 Nils Wedi (ECMWF)
  ECMWF's roadmap for non-hydrostatic modelling, existing and future challenges and recent progress in solving these
16:00-16:25 Bill Skamarock (National Center for Atmospheric Research)
  A cell-integrated SLSI shallow-water model with conservative and consistent mass and scalar mass transport
16:25-16:45 Afternoon Tea
16:45-17:20 Richard Loft (UCAR)
  G8 ECS: Enabling climate simulation at extreme scale
17:30-18:30 Welcome reception
Tuesday 23 October
09:30-10:05 Günther Zängl (Deutscher Wetterdienst)
  The ICON model and its relationship to the ICOMEX project
 

The ICON (ICOsahedral Nonhydrostatic) model is formulated on an unstructured icosahedral-triangular C-grid and applies several levels of optimization for efficient use on future computer architectures. The time-stepping scheme of the dynamical core is a horizontally fully explicit predictor-corrector scheme with implicit treatment of vertically propagating sound waves only. This way, only nearest-neighbour communication is required at runtime if the optional global diagnostics are turned off. The decision for a fully explicit dynamical core rather than a split-explicit one is based on the fact that a global model extending into the mesosphere needs to be numerically stable up to wind speeds well in excess of 200 m/s, because such extrema can be reached in breaking gravity waves in the mesosphere. The ratio between maximum wind speed and sound speed is thus too small to benefit from a split-explicit approach. Time splitting is applied instead between the dynamical core on the one hand and tracer transport and the physics parameterizations on the other, with a time step ratio of usually 4 or 5.

Further optimizations include a dedicated radiation grid, for which at most 60% of the grid points belonging to a given processor are sunlit at once, and an option to turn off moist physics and the transport of cloud and precipitation variables from the lower stratosphere upwards. Lower-level optimizations include a variable inner loop length for cache blocking or, alternatively, efficient use of vector architectures; a directive-based option to optimize the loop order for indirectly addressed operations; and placement of the halo grid points at the end of the index vector, ordered according to their halo level. Substantial improvements are still needed in the field of memory scaling, particularly regarding the setup phase of the domain decomposition, and for I/O, which so far is asynchronous but not yet parallelized.

ICON is one of the models participating in the ICOMEX (ICOsahedral-grid Models for EXascale earth-system simulations) project, which is dedicated to optimizing several key components of our modelling systems for future computer architectures. The sub-projects include a domain-specific language approach to optimizing the memory order of array variables for a variety of platforms, parallel internal postprocessing, parallelization concepts for I/O and optimized data formats, usage of GPUs, and optimization of the Helmholtz equation solvers needed for implicit time-stepping schemes. In addition, a continuous model intercomparison effort is made in order to systematically analyze the strengths and weaknesses of the participating models with respect to computational efficiency and scientific aspects (accuracy, conservation properties, etc.), with the goal of learning from each other and iteratively improving the identified weaknesses in each participating model.
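
The time splitting described above can be sketched as a simple loop: several short, fully explicit dynamics substeps per transport/physics step, with only nearest-neighbour communication inside the inner loop. The three component functions below are trivial stand-ins, not ICON code.

    # Sketch only: the component functions are placeholders, not ICON code.
    def dynamics_substep(state, dt): return state   # explicit dynamics, short step
    def tracer_transport(state, dt): return state   # tracer advection, long step
    def physics(state, dt): return state            # parameterizations, long step

    def model_step(state, dt_phys, nsub=5):
        """One long step: nsub dynamics substeps, then transport and physics."""
        dt_dyn = dt_phys / nsub
        for _ in range(nsub):       # nearest-neighbour communication only
            state = dynamics_substep(state, dt_dyn)
        state = tracer_transport(state, dt_phys)
        return physics(state, dt_phys)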

 
10:05-10:40 Todd Ringler (LANL)
  Can Models Built upon Unstructured Grids be Computationally Competitive on Emerging High-Performance Architectures? (with Jones and Jacobsen)
10:40-11:00 Morning Coffee
11:00-11:45 Michael Baldauf (Deutscher Wetterdienst)
  Limited area model weather prediction using COSMO with emphasis on the numerics of dynamical cores (including some remarks on the NEC vector supercomputer)
 

The current developments of the dynamical core of the COSMO model will be described. The main one is a consolidated version of the fast-waves solver in the split-explicit (horizontally explicit, vertically implicit; HE-VI) time-integration framework. The new formulation uses an improved vertical discretisation: a discretisation error analysis shows the need for weighted averages on strongly stretched staggered (Lorenz) grids, in particular for the divergence operator. The use of the strong conservation form for the divergence operator potentially further increases the accuracy of the metric correction terms, and the use of a Mahrer (1984) discretisation of horizontal pressure gradients allows stable integration over steeper slopes than the traditional terrain-following formulation. The experiences with our NEC vector computer during this development will be discussed, too.

A new test case for models using the compressible non-hydrostatic Euler equations was defined, which allows the derivation of an analytic solution. This solution is exact in the sense that it can be used for convergence studies of compressible models. The new fast-waves solver is tested against this test case.

Another development branch in COSMO concerns the improvement of both the conservation properties of the dynamical core and the ability to handle steep slopes. For this purpose, the usability of the anelastic Lipps and Hemler (1982) equation set and the discretisation of the EULAG model are being considered. In the framework of the 'Metström' priority programme of the German Research Foundation (DFG), the Discontinuous Galerkin method is being examined as another possible option for a future dynamical core for COSMO.

 
11:45-12:10 John Thuburn; Colin Cotter (Exeter/Imperial College)
  A primal-dual mixed finite element method for accurate and efficient atmospheric modelling on massively parallel computers
 

Efficient modelling of the atmosphere using massively parallel computers will require a quasi-uniform grid to avoid the communication bottleneck associated with the poles of the traditional latitude-longitude grid. However, achieving an accurate solution on a quasi-uniform grid is non-trivial. A mixed finite element method can provide the following desirable properties: mass conservation; a C-grid-like placement of variables for accurate wave dispersion and adjustment; vanishing curl of gradient; linear energy conservation; and steady geostrophic modes in the linear f-plane case. A further desirable property is that the potential vorticity (PV) should evolve as if advected by some chosen (accurate) advection scheme. This can be achieved by inserting the PV fluxes into the nonlinear Coriolis term that appears in the 'vector invariant' form of the momentum equation, provided the PV fluxes themselves can be constructed. Introducing a dual family of function spaces, in which the PV lives in a piecewise constant function space, along with suitable maps between primal and dual spaces, provides a convenient framework in which the PV fluxes can be computed by a finite volume advection scheme in the dual space.

The scheme can be implemented in terms of a small number of sparse matrices that can be precomputed off-line, avoiding the need for numerical quadrature at run time. A mass matrix and two dual-primal mapping operators need to be inverted at each time step, but these are well conditioned and the inversion can be absorbed into the iterative solver used for implicit time stepping at only a modest increase in cost. Some sample shallow water model results on a hexagonal icosahedral grid and a cubed sphere grid will be presented.
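
As an illustration of this run-time pattern (precomputed sparse operators applied as matrix-vector products, with the well-conditioned mass matrix inverted iteratively), here is a small sketch. The 1D toy matrices are hypothetical stand-ins for the real primal and dual finite element operators; only the pattern of use matches the abstract.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    # Offline: assemble the (hypothetical) sparse operators once.
    n = 100
    M = sp.diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr") / 6.0  # mass matrix
    D = sp.diags([-1.0, 1.0], [0, 1], shape=(n, n), format="csr")                # gradient-like map

    # Online, each time step: apply operators as sparse matrix-vector
    # products (no quadrature at run time) and invert the well-conditioned
    # mass matrix with a few iterations of conjugate gradients.
    b = D @ np.sin(np.linspace(0.0, 2.0 * np.pi, n))
    x, info = spla.cg(M, b)
    assert info == 0     # CG converged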

 
12:10-12:35 Jean Côté (UQAM)
  Time-parallel algorithms for weather prediction and climate simulation
 

Weather forecasting relies on computer models that must be executed in real time, meaning that a forecast needs to be disseminated to users well before the time period for which it is made. A challenge in the future will be to succeed in using the computing power available in massively parallel high-performance computers while still meeting this real-time requirement. Until now, weather forecast and related climate simulation models have taken advantage of the parallelism of these computers by dividing the work to be performed across the horizontal space dimensions.

The purpose of this work is to develop algorithms that also allow parallelism in the time dimension. This increased parallelism should accelerate the execution of weather and climate models, which in turn permits an increase in the spatial accuracy of models while still meeting the real-time requirement. The talk will present our preliminary work on the "Parareal" algorithm that has been developed for that purpose (Lions et al., 2001) and whose applications to date have included, among others, air quality, but not weather forecasting.

Weather forecasting presents a challenge for the method because of the presence of waves and advection. An important question is to examine how the traditional way to accelerate models with the semi-implicit semi-Lagrangian methodology can be advantageously blended with the “Parareal” approach.
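
As a concrete illustration, here is a minimal serial sketch of the Parareal iteration of Lions et al. (2001): a cheap coarse propagator sweeps serially, while the expensive fine solves within each iteration are independent and would be distributed across processors. The toy propagators below are hypothetical.

    import numpy as np

    def parareal(u0, t0, t1, n_slices, coarse, fine, n_iter):
        """Parareal iteration for an initial value problem on [t0, t1].

        coarse(u, ta, tb) and fine(u, ta, tb) propagate a state from ta
        to tb; fine is the expensive solver.  Serial sketch: in practice
        the fine solves in each iteration run in parallel across slices.
        """
        ts = np.linspace(t0, t1, n_slices + 1)
        u = [u0]                                 # iteration 0: coarse prediction
        for n in range(n_slices):
            u.append(coarse(u[n], ts[n], ts[n + 1]))
        for _ in range(n_iter):
            f = [fine(u[n], ts[n], ts[n + 1]) for n in range(n_slices)]  # parallel part
            u_new = [u0]
            for n in range(n_slices):            # serial correction sweep
                g_new = coarse(u_new[n], ts[n], ts[n + 1])
                g_old = coarse(u[n], ts[n], ts[n + 1])
                u_new.append(g_new + f[n] - g_old)
            u = u_new
        return u

    # Toy problem du/dt = -u: coarse = 1 Euler step, fine = 100 Euler steps.
    euler = lambda u, ta, tb, m: u * (1.0 - (tb - ta) / m) ** m
    sol = parareal(1.0, 0.0, 2.0, n_slices=10,
                   coarse=lambda u, a, b: euler(u, a, b, 1),
                   fine=lambda u, a, b: euler(u, a, b, 100),
                   n_iter=3)
    print(sol[-1], np.exp(-2.0))                 # Parareal vs exact solution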

 
12:35-13:30 Lunch
13:30-14:15 Henry Weller (OpenCFD Limited)
  Addressing unstructuredness and hardware and software divergence
14:15-14:40 Francis X. Giraldo; James F. Kelly; Shiva Gopalakrishnan; Michal Kopera; Lester Carr III (Naval Postgraduate School)
  Development of a Nonhydrostatic Unified Atmospheric Model (NUMA) on Multi-Core and Many-core Computer Architectures
 

We have been developing a nonhydrostatic atmospheric model based on the fully compressible Euler equations for applications in both local and global atmospheric modeling. This new model, NUMA, has been designed to be unified in terms of: the class of problems that it can solve (i.e., local and global modeling); the class of numerical methods that it uses in space (i.e., continuous AND discontinuous Galerkin methods); the class of time-integrators that it can use (i.e., implicit-explicit methods using multi-step and multi-stage methods); the types of iterative solvers and preconditioners that it contains (an entire suite of methods such as GMRES, BiCGStab, Chebyshev, etc.); and the types of computer architectures that it targets (e.g., multi-core and many-core/heterogeneous computing). In this presentation, we shall touch on all the highlights listed above and describe the current status of the model, focusing especially on its performance.

 
14:40-15:10 Afternoon Coffee
15:10-15:55 Andy Grant (IBM)
  tba
16:00-16:25 David Ham; Patrick E. Farrell; Simon W. Funke; Marie E. Rognes (Imperial College London)
  Fully automatic adjoints: a robust and efficient mechanism for generating adjoint dynamical cores
 

The problem of generating and maintaining adjoint models is sufficiently difficult that typically only the most advanced and well-resourced community ocean models achieve it. Two current technologies exist, each with its own limitations. Algorithmic differentiation, also called automatic differentiation (AD), is very difficult to apply to existing code and requires a major initial investment to prepare the code for automatic adjoint generation. AD tools may also have difficulty with code employing modern software constructs such as derived data types. An alternative is to formulate the adjoint differential equation and to discretise it separately. This has the disadvantages that two different model code bases must be maintained and that the discretisation of the continuous adjoint is not automatically consistent with that of the forward model, producing an additional source of error.

The alternative presented here is to formulate the flow model in the high-level language UFL (Unified Form Language) and to generate the model automatically using the software of the FEniCS project. In this approach it is the high-level code specification which is differentiated, a task very similar to the formulation of the continuous adjoint. However, since the forward and adjoint models are generated automatically, the difficulty of maintaining them vanishes and the software engineering process is therefore robust. The scheduling and execution of the adjoint model, including the application of an appropriate checkpointing strategy, is managed by libadjoint. In contrast to the conventional algorithmic differentiation description of a model as a series of primitive mathematical operations, libadjoint employs a new abstraction of the simulation process as a sequence of discrete equations which are assembled and solved. It is the coupling of the respective abstractions employed by libadjoint and the FEniCS project which produces the adjoint model automatically.
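
The abstraction used by libadjoint, a forward model viewed as a sequence of discrete equations that are assembled and solved, with the adjoint obtained by solving the transposed equations in reverse order, can be illustrated with a linear toy model in numpy. This sketch mimics the idea only, not the libadjoint API.

    import numpy as np

    # Forward model: a sequence of linear solves A_k x_k = B_k x_{k-1}.
    rng = np.random.default_rng(0)
    n, n_steps = 5, 4
    A = [np.eye(n) + 0.1 * rng.standard_normal((n, n)) for _ in range(n_steps)]
    B = [np.eye(n) + 0.1 * rng.standard_normal((n, n)) for _ in range(n_steps)]

    def forward(x0):
        x = x0
        for k in range(n_steps):
            x = np.linalg.solve(A[k], B[k] @ x)    # "assemble and solve"
        return x

    # Functional J = c . x_N.  The adjoint sweeps the same sequence
    # backwards with transposed operators: lam <- B_k^T A_k^{-T} lam,
    # starting from lam = c; then dJ/dx0 = lam.
    c = rng.standard_normal(n)
    lam = c.copy()
    for k in reversed(range(n_steps)):
        lam = B[k].T @ np.linalg.solve(A[k].T, lam)

    # Verify the gradient with a finite difference in a random direction.
    x0, d, eps = rng.standard_normal(n), rng.standard_normal(n), 1e-6
    fd = (c @ forward(x0 + eps * d) - c @ forward(x0)) / eps
    print(fd, lam @ d)                             # should agree closely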

 
16:25-16:45 Afternoon Tea
16:45-17:20 Peter Jimack (Leeds)
  On the Development of Implicit Solvers for Time-Dependent Systems
 

This presentation will describe some of our recent experiences in the development of efficient implicit solvers for systems of nonlinear time-dependent partial differential equations. These experiences come from applications other than weather and climate prediction, so the primary aim of the presentation is to stimulate discussion on some of the challenges and opportunities associated with the development of efficient implicit solvers. The main issues that will be addressed centre on the fast solution of the discrete algebraic systems arising at each time step, with a focus on multilevel solution methods and their parallel implementation. If time permits, the further issues of adaptivity in time and space will also be considered.
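
As a concrete example of such multilevel methods, here is a minimal geometric multigrid V-cycle for the 1D Poisson problem, with damped Jacobi smoothing, full-weighting restriction and linear interpolation. It is a generic sketch, not one of the solvers from the talk.

    import numpy as np

    def residual(u, f, h):
        up = np.pad(u, 1)                      # homogeneous Dirichlet boundaries
        return f - (-up[:-2] + 2.0 * up[1:-1] - up[2:]) / (h * h)

    def jacobi(u, f, h, sweeps):
        for _ in range(sweeps):                # damped Jacobi, omega = 2/3
            u = u + (2.0 / 3.0) * (h * h / 2.0) * residual(u, f, h)
        return u

    def v_cycle(u, f, h):
        u = jacobi(u, f, h, 2)                 # pre-smoothing
        if u.size > 1:
            r = residual(u, f, h)
            rc = 0.25 * (r[:-2:2] + 2.0 * r[1:-1:2] + r[2::2])   # restrict
            ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)         # coarse solve
            ecp = np.pad(ec, 1)
            e = np.zeros_like(u)
            e[1::2] = ec                                          # interpolate
            e[0::2] = 0.5 * (ecp[:-1] + ecp[1:])
            u = u + e
        return jacobi(u, f, h, 2)              # post-smoothing

    n = 2**7 - 1                               # interior points, h = 1/(n+1)
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    f = np.pi**2 * np.sin(np.pi * x)           # -u'' = f, exact u = sin(pi x)
    u = np.zeros(n)
    for _ in range(10):
        u = v_cycle(u, f, h)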

 
Wednesday 24 October
09:30-10:05 John McGregor (CSIRO)
  Cube-based atmospheric GCMs at CSIRO
 

Two cube-based atmospheric GCMs have been developed at CSIRO: the Conformal-Cubic Atmospheric Model (CCAM) and, more recently, the Variable Cubic Atmospheric Model (VCAM). The design of the dynamical cores of both models will be described and compared. CCAM is formulated on the conformal-cubic grid and employs 2-time-level semi-Lagrangian semi-implicit numerics. VCAM, on the other hand, is cast on the highly uniform equiangular gnomonic-cubic grid and employs a split-explicit flux-conserving approach, which provides benefits for modelling trace gases. Both models use reversible staggering for the wind components (McGregor, MWR, 2005) to produce good wave dispersion behaviour, and both use the same physics package. CCAM includes the Miller-White nonhydrostatic treatment, whereas VCAM is presently a hydrostatic model.

Both models use an efficient MPI message-passing strategy. Although VCAM avoids the message-passing overheads necessitated by the Helmholtz solver of CCAM, it instead has some minor overheads related to more frequent calls to the wind staggering/unstaggering routines. Timings will be shown for simulations utilising up to 288 processors.

Comparative model performance will be shown for idealized advection tests, the Held-Suarez test case, aquaplanet simulations, and for AMIP simulations.

 
10:05-10:40 Stephane Popinet (NIWA)
  Quadtree-adaptive global atmospheric modelling on parallel systems
 

I will present initial results for a three-dimensional, hydrostatic, global atmospheric model combining quadtree horizontal adaptivity on a cubed-sphere grid with a standard layered vertical discretisation. This model is implemented within the Gerris framework (http://gfs.sf.net) and thus automatically inherits attributes such as data parallelism and load balancing. For large-scale atmospheric simulations, I will show that quadtree adaptivity leads to a large gain in the scaling exponent relating computational cost to the resolution of sharp frontal structures.

 
10:40-11:00 Morning Coffee
11:00-11:45 Mark Taylor (Sandia)
  High resolution and variable resolution capabilities of the Community Atmosphere Model (CAM) with a spectral finite element dynamical core
 

I will describe our work developing CAM-SE, a highly scalable version of the Community Atmosphere Model (CAM) running with the spectral element dynamical core from NCAR's High-Order Method Modeling Environment. For global 1/4 and 1/8 degree resolutions CAM-SE runs efficiently on hundreds of thousands of processors on modern supercomputers and obtains excellent simulation throughput. CAM-SE also supports fully unstructured conforming quadrilateral grids. I will show results using a variable resolution grid with 1/8 degree resolution over the central U.S., transitioning to 1 degree over most of the globe. We hope that the variable resolution can provide a 10-100 times more efficient way to calibrate and evaluate the CAM 1/8 degree configuration.

CAM-SE uses quadrilateral elements and tensor-product Gauss-Lobatto quadrature. Its fundamental computational kernels look like dense matrix-vector products which map well to upcoming computer architectures. It solves the hydrostatic equations with a spectral element horizontal discretization and the hybrid-coordinate Simmons & Burridge (1981) vertical discretization. It uses a mimetic formulation of spectral elements which preserves the adjoint and annihilator properties of the divergence, gradient and curl operations. These mimetic properties result in local conservation (to machine precision) of mass, tracer mass and (2D) potential vorticity, and semi-discrete conservation (exact with exact time-discretization) of total energy. Hyper-viscosity is used for all numerical dissipation.
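
The tensor-product kernel structure can be sketched in a few lines: within each quadrilateral element the field lives on an ngl-by-ngl grid of Gauss-Lobatto points, and derivatives in the two reference directions are small dense matrix products applied to all elements at once. D below is a generic dense stand-in, not an actual Gauss-Lobatto differentiation matrix.

    import numpy as np

    nelem, ngl = 1000, 4
    D = np.random.default_rng(0).standard_normal((ngl, ngl))   # 1D derivative stand-in
    u = np.random.default_rng(1).standard_normal((nelem, ngl, ngl))

    # Derivatives in the two reference-element directions, for all
    # elements at once: dense mat-mats that vectorize and accelerate well.
    du_dxi  = np.einsum("ab,ebj->eaj", D, u)   # derivative along first index
    du_deta = np.einsum("ab,eib->eia", D, u)   # derivative along second index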

 
11:45-12:10 Colin M. Zarzycki; Christiane Jablonowski (University of Michigan)
  Evaluating Variable-Resolution CAM-SE as a Numerical Weather Prediction Tool
 

The global modeling community has traditionally struggled to simulate meso-alpha and meso-beta scale (25-500 km) systems in the atmosphere such as tropical cyclones, strong fronts, and squall lines. With traditional General Circulation Model (GCM) resolutions of 50-300 km, these features have been under-resolved and require significant parameterization at the sub-grid scale. In an effort to help alleviate these issues, the use of limited area models (LAMs) with high resolution has become popular, although, by definition, these models typically lack two-way communication with the exterior domain. Variable-resolution global dynamical models can serve as the bridge between traditional global forecast models and high-resolution LAMs by applying fine grid spacing in areas of interest. These models can utilize existing computing platforms to model high resolutions on a regional basis while maintaining global continuity, thereby eliminating the need for the externally forced and possibly numerically and physically inconsistent boundary conditions required by LAMs.

A statically-nested, variable-mesh option has recently been introduced into the National Center for Atmospheric Research (NCAR) Community Atmosphere Model's (CAM) Spectral Element (SE) dynamical core. We present short-term CAM-SE model simulations of historical tropical cyclones and compare the model's prediction of storm track and intensity to other global and regional models used operationally by hurricane forecast centers. Additionally, we explore the model's ability to simulate other weather phenomena traditionally unavailable to global modelers, such as mesoscale convective systems and precipitation lines associated with frontal passages. We also discuss the performance of existing parameterizations in CAM with respect to high-resolution modeling, as well as the potential computational benefits of using a variable-resolution setup as an operational tool for both weather and climate prediction.

 
12:10-12:35 Marcus J Thatcher; John L McGregor (CSIRO)
  A prototype model for coupled simulations of regional climate suitable for massively parallel architectures
 

The formulation of regional climate models has been undergoing major changes, including advances in variable-resolution models and attempts to simulate the coupled atmosphere-ocean system regionally. This talk outlines the design of a prototype global variable-resolution, coupled atmosphere-ocean model. Although the grid can be smoothly deformed into a global simulation, the climate model has been optimised for regional simulations where the grid is focused over a specified location using a Schmidt transformation. Both atmosphere and ocean dynamical cores employ reversible staggering between Arakawa A and C grids, which can in theory produce very good dispersion characteristics for both atmosphere and ocean models.

The performance of the model scales well to 300+ processors and is expected to be suitable for massively parallel architectures, as the approach avoids latency problems associated with mismatched atmosphere and ocean grids. Furthermore, the approach could be appropriate for global climate models if computing resources increase by a factor of 10 with the next generation of supercomputers. Error growth on the coarser regions of the variable-resolution grid is suppressed by downscaling with a system of scale-selective filters, which use an efficient convolution-based approach that can operate with non-periodic boundary conditions and, in the case of the ocean model, irregular coastlines. Some preliminary results are presented for practical applications of the model simulating regional climate, as well as a discussion of the algorithms used for the reversible staggering and the scale-selective filters.

 
12:35-13:30 Lunch
13:30-14:15 Tom Edwards (Cray Inc.)
  Earth system modeling strategies on extreme scale architectures
 

Achieving the highest possible performance is a key requirement for the earth system modeling community. Extreme scale architectures, including those currently available, provide opportunities for the advancement of simulation capabilities and present challenges for the HPC community as a whole. A number of significant factors have been identified in the development and deployment of Exascale systems, and the approaches taken to address these challenges will strongly influence modeling strategies. As we enter the Exascale era, a determinant of success will be greater levels of cooperation between model developers and the broader HPC community. Several modeling groups have already engaged in co-development approaches to identify and address factors limiting performance, scalability and efficiency. A further consideration moving forward will be the impact on HPC architectures and workflows as the science community becomes increasingly engaged with data-intensive approaches.

 
14:15-14:40 Eike Mueller; Robert Scheichl (University of Bath)
  Scalability of Elliptic Solvers in Numerical Weather and Climate Prediction
 

Numerical weather and climate prediction requires the solution of elliptic partial differential equations with a large number of unknowns on a spherical grid. In particular, if implicit time stepping is used in the dynamical core of the forecast model, an elliptic PDE has to be solved at each time step, which often amounts to a significant proportion of the model runtime. The goal of the Next Generation Weather and Climate Prediction (NGWCP) project is the development of a new dynamical core for the UK Met Office Unified Model with a significantly increased global model resolution, resulting in more than 10^10 degrees of freedom for each atmospheric variable. To run the model operationally, the solver has to scale to hundreds of thousands of processor cores on modern computer architectures.

To investigate the scalability of the implicit time stepping algorithm we have tested and optimised existing solvers in the Distributed and Unified Numerics Environment (DUNE) and the hypre library. In addition, we have implemented a matrix-free parallel geometric multigrid code with a vertical line smoother. We demonstrate the scalability of the solvers on up to 65,536 cores of the HECToR supercomputer for a system with 10^10 degrees of freedom for the elliptic PDE arising from semi-implicit semi-Lagrangian time stepping.

To identify the most promising solver we investigated the robustness of simple and widely used preconditioners, such as vertical line relaxation, as well as more advanced multigrid methods. We compared algebraic and matrix-free geometric multigrid algorithms to quantify the matrix and coarse-grid setup costs, and studied the performance of the various solvers on different computer architectures.
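
As a sketch of the vertical line relaxation mentioned above: in atmospheric grids the vertical coupling is much stronger than the horizontal, so each column is solved implicitly with a tridiagonal solve while horizontal neighbours are treated explicitly. The grid, coefficients and boundary conditions below are illustrative stand-ins, not the NGWCP configuration.

    import numpy as np
    from scipy.linalg import solve_banded

    def vertical_line_relaxation(u, f, a_horiz, a_vert):
        """One sweep of vertical line relaxation on an (ncol, nz) grid.

        Toy anisotropic Laplacian: each column is coupled implicitly in
        the vertical (one tridiagonal solve per column) while horizontal
        neighbour coupling is treated explicitly.  Periodic horizontally,
        homogeneous Dirichlet vertically; all coefficients illustrative.
        """
        ncol, nz = u.shape
        band = np.zeros((3, nz))                   # tridiagonal vertical operator
        band[0, 1:] = -a_vert                      # superdiagonal
        band[1, :] = 2.0 * a_vert + 2.0 * a_horiz  # diagonal
        band[2, :-1] = -a_vert                     # subdiagonal
        u_new = np.empty_like(u)
        for c in range(ncol):                      # columns are independent
            rhs = f[c] + a_horiz * (u[(c - 1) % ncol] + u[(c + 1) % ncol])
            u_new[c] = solve_banded((1, 1), band, rhs)
        return u_new

    # Strong vertical anisotropy (a_vert >> a_horiz) is exactly the
    # regime in which line relaxation is an effective smoother.
    rng = np.random.default_rng(1)
    u, f = np.zeros((64, 90)), rng.standard_normal((64, 90))
    for _ in range(5):
        u = vertical_line_relaxation(u, f, a_horiz=1.0, a_vert=100.0)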

 
14:40-15:10 Afternoon coffee
15:10-15:55 Max Gunzburger (Florida State)
  Parallel Algorithm for Spherical Delaunay Triangulations and Spherical Centroidal Voronoi Tessellations
16:00-16:25 Robert Scheichl (University of Bath)
  Multilevel Markov-Chain Monte Carlo Methods for Large Scale Problems
 

Monte Carlo methods play a central role in stochastic uncertainty quantification and data assimilation. In particular, Markov chain Monte Carlo methods are also of great interest in the atmospheric sciences. However, they are notorious for their slow convergence and high computational cost. In this talk I will present recent developments that mitigate this serious problem using a novel multilevel strategy and deterministic sampling rules. The talk will focus on methodology; the applications to date come mainly from other fields.
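
The multilevel strategy builds on the multilevel Monte Carlo telescoping identity E[Q_L] = E[Q_0] + sum_{l=1}^{L} E[Q_l - Q_{l-1}], which shifts most of the sampling effort to cheap coarse levels. Here is a toy sketch of the plain (non-Markov-chain) version of that estimator; the quantity of interest is illustrative.

    import numpy as np

    rng = np.random.default_rng(0)

    def Q(level, x):
        """Midpoint-rule approximation of int_0^1 exp(x*t) dt on 2**level cells."""
        n = 2 ** level
        t = (np.arange(n) + 0.5) / n
        return np.exp(np.outer(x, t)).mean(axis=1)

    L = 5
    n_samples = [4000 // 4 ** l + 10 for l in range(L + 1)]  # fewer on fine levels
    est = 0.0
    for l in range(L + 1):
        x = rng.standard_normal(n_samples[l])    # same samples on both levels
        y = Q(l, x) - (Q(l - 1, x) if l > 0 else 0.0)
        est += y.mean()
    print(est)    # approximates E[(exp(X) - 1) / X] for X ~ N(0, 1)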

 
16:25-16:45 Afternoon tea
16:45-17:20 Richard Loft (UCAR)
  Meeting the Challenge of Many-core Architectures in Weather and Climate Models
 

Many-core processor systems such as GPUs and Intel's Xeon Phi achieve higher theoretical performance and improved power efficiency by trading a decrease in clock speed for an increase in the number of compute threads. The questions relevant to this meeting are: 1) Do these architectures offer real benefits in performance over conventional multiprocessors for climate and weather applications? 2) If so, is it worth refactoring these large, complex applications to achieve these benefits? Over the past few years, many weather and a few climate groups around the world have been trying to answer these questions. This talk will survey their progress and experiences, as presented by them at a series of many-core workshops held at NCAR over the past two years.

Specific topics will include: the right and wrong way to measure, report, and think about many-core performance; assessment of the various programming paradigms currently available for the processor + many-core accelerator architecture; experience with different compilers and tools; and the viability of the code refactoring strategies for many-core processors that have been tried.

 
19:30-22:00 Workshop dinner
Thursday 25 October
09:30-10:05 Pier Luigi Vidale (Reading)
  Porting and optimisation of the Met Office Unified Model on Petascale architectures
 

We present porting, optimisation and scaling results from our work with the United Kingdom's Unified Model on a number of massively parallel architectures: the UK MONSooN and HECToR systems, and the German HERMIT and French Curie supercomputers, both part of the Partnership for Advanced Computing in Europe (PRACE). The model code used for this project is a configuration of the Met Office Unified Model (MetUM) called Global Atmosphere GA3.0, in its climate mode (HadGEM3; Walters et al., 2011; Malcolm et al., 2010). The atmospheric dynamical core uses a semi-implicit, semi-Lagrangian scheme. The model grid is spherical (a lat/lon grid) and polar filtering is applied around the two singularities. For the configuration used on PRACE, with a horizontal grid spacing of 25 km (N512) and 85 vertical levels up to 85 km, we use a 10-minute time step. Initial conditions are derived from fully balanced coupled experiments at lower resolution, and atmosphere/land-surface perturbations are imposed using standard Met Office tools for ensemble initialisation.

Initial development occurred on a joint NERC-Met Office facility, MONSooN, with 29 IBM-P6 nodes, using up to 12 nodes. In parallel with this activity, we have tested the model on the NERC/EPSRC supercomputer HECToR (Cray XE6), using 1,536 to 24,576 cores. The scaling breakthroughs came after implementing hybrid parallelism: OpenMP and MPI. The N512 model scales effectively up to 12,244 cores and has now been successfully ported to the PRACE Tier-0 systems (Curie and HERMIT), where it is operated in ensemble mode. Current developments include extensions to 17 km and 12 km grid spacing (N768 and N1024), which make use of up to 96 nodes on the new Met Office IBM-P7 system. The next UM dynamical core, "EndGame", offers scaling improvements, with good performance on twice the current number of cores, by altering the horizontal and vertical grid staggering, as well as eliminating the need for polar filtering.

 
10:05-10:30 Chris Budd (University of Bath)
  Adaptivity using moving meshes
10:30-10:55 Michal A. Kopera; Francis X. Giraldo (Naval Postgraduate School)
  Adaptive mesh refinement for a 2D unified continuous/discontinuous Galerkin Non-hydrostatic Atmospheric Model
 

Adaptive mesh refinement (AMR) techniques for element-based Galerkin methods are becoming a strong candidate for future numerical weather prediction models. Particular attention has been paid to the discontinuous Galerkin method [1], [2], [3] as it avoids global assembly of data and makes the implementation of the algorithm easier. In this presentation we will focus on the extension of the 2D discontinuous Galerkin, quad-based non-conforming adaptive mesh refinement algorithm to a continuous Galerkin formulation. The novelty of this approach is that we propose to do this within a unified CG/DG nonhydrostatic atmospheric model that we call NUMA (Nonhydrostatic Unified Model of the Atmosphere). NUMA is equipped to handle AMR at various levels: IMEX time-integrators allow the use of large time steps, and a new class of preconditioners [4] has been designed specifically to handle the IMEX methods with AMR.

[1] A. Muller, J. Behrens, F.X. Giraldo, V. Wirth (2011). An Adaptive Discontinuous Galerkin Method for Modelling Atmospheric Convection. Defense Technical Information Center Report, http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA546279

[2] S. Blaise and A. St-Cyr (2011). A Dynamic hp-Adaptive Discontinuous Galerkin Method for Shallow-Water Flows on the Sphere with Application to a Global Tsunami Simulation, Monthly Weather Review.

[3] M. A. Kopera and F.X. Giraldo (2012). AMR for a 2d DG Nonhydrostatic atmospheric model, in preparation.

[4] L.E. Carr, C.F. Borges, and F.X. Giraldo (2012). An element-based spectrally-optimized approximate inverse preconditioner for the Euler equations, SIAM J. Sci. Comp. (in press).

 
10:55-11:15 Morning coffee
11:15-11:40 Sarah-Jane Lock (University of Leeds)
  Linear analyses of RK IMEX schemes for atmospheric modelling
11:40-12:05
  tba
