# Workshop Programme

## Adaptive Multiscale Methods for the Atmosphere and Ocean

22-24 August 2012

## Timetable

### Wednesday 22 August

13:00-13:30 | Welcome and introduction to programme and workshop

13:30-14:30 | Piggott, M (Imperial College London)

Modelling geophysical fluid dynamics with anisotropic adaptive mesh methods | Sem 1

Many geophysical fluid dynamics problems include large variations in spatial scales that are important to resolve in a numerical simulation. An important example is the global ocean, where dynamics at spatial scales of thousands of kilometres have strong two-way coupling with processes occurring at the kilometre, and sub-kilometre, scale, e.g. boundary layers, eddies and buoyancy-driven flows interacting with bathymetry. Adaptive and unstructured mesh methods represent a possible means to simulate these multi-scale systems efficiently. In addition, smaller-scale processes often have high aspect ratios, and hence anisotropic mesh methods should be considered. In this talk our work applying anisotropic adaptive methods to geophysical fluid dynamics problems will be reviewed, and a series of recent applications presented.
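Anisotropic adaptivity of the kind described above is commonly driven by a metric tensor built from the Hessian of a solution field. The sketch below shows the generic textbook construction (it is not the specific formulation used in Fluidity or any other model; the function name, tolerance and bounds are illustrative):

```python
import numpy as np

def anisotropic_metric(H, eps, h_min=1e-3, h_max=1e2):
    """Illustrative metric tensor for anisotropic mesh adaptation.

    Given a (symmetric) Hessian H of a solution field at a point and an
    interpolation-error tolerance eps, return a metric M whose unit ball
    prescribes the desired direction-dependent element sizes. h_min and
    h_max bound the resulting edge lengths.
    """
    evals, evecs = np.linalg.eigh(0.5 * (H + H.T))   # symmetrise first
    # Desired size along eigendirection i: h_i = sqrt(eps / |lambda_i|),
    # i.e. metric eigenvalue |lambda_i| / eps.
    lam = np.abs(evals) / eps
    lam = np.clip(lam, 1.0 / h_max**2, 1.0 / h_min**2)  # size bounds
    return evecs @ np.diag(lam) @ evecs.T

# A field varying sharply in x and slowly in y yields a metric that
# requests short edges in x and long edges in y: a high-aspect-ratio
# (anisotropic) element, exactly the boundary-layer situation above.
H = np.diag([100.0, 0.01])
M = anisotropic_metric(H, eps=0.01)
h = 1.0 / np.sqrt(np.diag(M))   # edge lengths along the axes
print(h)   # roughly [0.01, 1.0], i.e. aspect ratio ~100
```

A mesh generator consuming this metric would aim to make every edge of unit length when measured in M, which is how a single scalar tolerance produces stretched elements.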

14:30-15:00 | Steppeler, J (Deutscher Wetterdienst (DWD))

Variable resolution and uniform second- and third-order approximation | Sem 1

Approximations on polygonal grids, such as the cubed sphere or icosahedral grids, require a slightly irregular grid, as at high resolution no uniform polygonal cover by cells is possible. For variable resolution, sudden refinement is considered impossible by some authors, and a gradual change of resolution is preferred, as in the new MPAS model of NCAR. It will be shown that such problems can be traced back to a decrease of the approximation order to 1 at points where the resolution is irregular. Examples will be given to show that a jump of resolution is possible without problems when care is taken that an approximation order of 2 or 3 is maintained at such points.
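The order reduction at a resolution jump can be demonstrated with a one-dimensional finite-difference experiment (a generic illustration, not Steppeler's scheme): the centred difference is second-order on a uniform grid but drops to first order at a point where the spacing changes.

```python
import numpy as np

def centred_diff(f_left, f_right, h_left, h_right):
    """Centred-style difference (f(x+hR) - f(x-hL)) / (hL + hR).

    Second-order accurate only when hL == hR; the leading error term
    (hR - hL)/2 * f''(x) makes it first order at a spacing jump.
    """
    return (f_right - f_left) / (h_left + h_right)

def error_at_jump(h, ratio):
    """Error in the derivative of sin at x=1, with spacing h on the
    left and ratio*h on the right (ratio=1 gives a uniform grid)."""
    x = 1.0
    approx = centred_diff(np.sin(x - h), np.sin(x + ratio * h), h, ratio * h)
    return abs(approx - np.cos(x))

# Halving h should divide the error by ~4 (order 2) on the uniform
# grid, but only by ~2 (order 1) across a 2:1 resolution jump.
for ratio, label in [(1.0, "uniform"), (2.0, "2:1 jump")]:
    e1, e2 = error_at_jump(1e-3, ratio), error_at_jump(5e-4, ratio)
    print(f"{label}: observed order ~ {np.log2(e1 / e2):.2f}")
```

This is exactly the "decrease of approximation order to 1" the abstract refers to; restoring order 2 or 3 at such points requires a one-sided or otherwise corrected stencil rather than the naive centred formula.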

15:00-15:30 | Afternoon Tea

15:30-16:30 | Giraldo, F (Naval Postgraduate School)

Development of the Nonhydrostatic Unified Model of the Atmosphere (NUMA): a unified model for both local-area modeling and global modeling | Sem 1

In this talk I will give an overview of the Nonhydrostatic Unified Model of the Atmosphere (NUMA). NUMA solves the fully compressible nonhydrostatic equations, with the goal of unifying the model across various fronts: applications (local-area and global modeling); numerics (both continuous and discontinuous Galerkin methods); time-integration (explicit and implicit-explicit methods); iterative solvers and preconditioners (a suite of these); grid generation (both conforming and non-conforming grids); and, finally, parallelization (both CPU- and GPU-based). We will describe what we mean by each of these components and report on the status of each. The work described will set the stage for the work we wish to carry out during the stay of my group at the Newton Institute.

16:30-17:00 | Taylor, M (Sandia National Laboratories)

Variable resolution experiments using CAM's spectral finite element dynamical core | Sem 1

Much recent work in the Community Earth System Model (CESM) and its atmosphere component (CAM) has been devoted to developing higher-resolution configurations, motivating new dynamical cores that can use unstructured quasi-isotropic grids. These dynamical cores also allow for variable-resolution configurations in CAM, but making use of variable resolution (adaptive or statically refined grids) is difficult due to the strong resolution sensitivity of CAM's many subgrid physical parameterizations. Here we will describe our work with statically refined grids in CAM, using CAM's spectral finite element dynamical core. This work supports the development of a model/observation "test bed". Test beds combine models, observations and uncertainty-quantification methodologies in order to evaluate existing models, quickly develop and test new parameterizations, and constrain parameters with observations. It is hoped that variable resolution can provide a 10-100 times more efficient way to calibrate and evaluate high-resolution configurations of CAM. Our initial focus is on central U.S. precipitation, using a global 14 km grid and a variable-resolution grid with 14 km resolution over the central U.S., transitioning to 110 km over most of the globe. For both configurations, we will present computational performance and compare precipitation-related diagnostics.

17:00-18:00 | Welcome Drinks Reception

### Thursday 23 August

09:00-10:00 | Behrens, J (Universität Hamburg)

Reviewing a roadmap for adaptive atmospheric modeling | Sem 1

In my 2006 monograph on Adaptive Atmospheric Modeling, the last section, titled "Roadmap for the next five years", outlines several aspects of numerical methods for multi-scale phenomena in the atmosphere. Now, more than five years later, it is time to review a few of the issues raised in 2006. As one would expect, the grand solution to adaptive multi-scale modeling has not been found so far, and some solutions have generated new questions. But a lot has been achieved since 2006, and it is interesting to see which directions have been taken. I will cover consistent numerical methods, refinement criteria and strategies, applications, and the efficiency of adaptive methods in atmosphere and ocean applications.

10:00-10:30 | Lucas, D (University of Bristol)

A highly adaptive three dimensional hybrid vortex method for inviscid flows | Sem 1

Motivated by outstanding problems surrounding vortex stretching, a new numerical method to solve the inviscid Euler equations for a three-dimensional, incompressible fluid is presented. Special emphasis is given to spatial adaptivity, in order to resolve as broad a range of scales as possible in a completely self-similar fashion. We present a hybrid vortex method whereby we discretise the vorticity in Lagrangian filaments and perform an inversion to compute the velocity on an adapted finite-volume grid. This allows for a two-fold adaptivity strategy. First, although naturally spatially adaptive by definition, the vorticity filaments undergo 'renoding': we redistribute nodes along each filament to concentrate their density in regions of high curvature. Secondly, the Eulerian mesh is adapted to follow regions of high strain by increasing resolution based on local filament dimensions. These features allow vortex stretching and folding to be resolved in a completely automatic and self-similar way. The method is validated against well-known vortex rings, and newly discovered helical vortex equilibria are also used to test it.

10:30-11:00 | Morning Coffee

11:00-11:30 | Piccolo, C (Met Office)

Adaptive mesh method in the Met Office variational data assimilation system | Sem 1

A frequent problem in forecasting fog or icy roads in a numerical weather prediction system is attributed to the misinterpretation of the boundary layer structure in the assimilation procedure. Case studies showed that much of the misinterpretation of temperature inversions and stratocumulus layers in the assimilation is due to inappropriate background error covariances. This paper looks at the application of adaptive mesh methods in the Met Office variational assimilation system to modify the background error correlations in the boundary layer when temperature inversions or stratocumulus layers are present in the background state.

11:30-12:30 | Beck, T (Karlsruhe Institute of Technology (KIT))

Adaptive Numerical Simulation of Idealized Cyclones | Sem 1

The processes causing a cyclone's formation, its intensification, its motion and finally its termination all proceed at multiple, interacting temporal and spatial scales. Therefore the forecasting of a cyclone's dynamics can benefit greatly from adaptive techniques such as local mesh refinement. Mesh adaptation strategies are often based on problem-dependent indicators that need to be determined and which have different properties with respect to the associated computational effort and the quality of the resulting meshes. For example, the computation of goal-oriented error indicators requires sensitivity information provided by the solution of an additional linear problem, which leads to a considerable overhead in computation time and additional storage requirements. In that context, we address the question of whether the complexity of different refinement criteria is justifiable. To this end, we investigate idealized cyclone scenarios and systematically analyze the efficiency of adaptive numerical methods employing a selection of indicators based on physical criteria, heuristic criteria, a posteriori error estimators, and goal-oriented approaches. We compare these approaches with regard to computational cost, storage requirements, implementation complexity, and the accuracy of the resulting solution.
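As a toy illustration of the cheapest class of indicator mentioned above (a heuristic, gradient-based criterion), the sketch below marks cells for refinement with a fixed-fraction strategy. All names and thresholds are illustrative; a goal-oriented approach would replace the gradient indicator with a dual-weighted residual quantity at the cost of solving an extra linear problem.

```python
import numpy as np

def gradient_indicator(u, dx):
    """Heuristic refinement indicator: magnitude of the local gradient
    of a 1D cell-averaged field u. Cheap, but not error-targeted."""
    return np.abs(np.gradient(u, dx))

def mark_cells(indicator, fraction=0.2):
    """Fixed-fraction marking: flag the cells carrying the largest
    indicator values (here the top 20%) for refinement."""
    n_mark = max(1, int(fraction * indicator.size))
    threshold = np.sort(indicator)[-n_mark]
    return indicator >= threshold

# A sharp front at x = 0.5 concentrates the marked cells around it.
x = np.linspace(0.0, 1.0, 101)
u = np.tanh((x - 0.5) / 0.05)
marked = mark_cells(gradient_indicator(u, x[1] - x[0]))
print(marked.sum(), "cells marked, clustered near x = 0.5")
```

Comparing such cheap criteria against a posteriori estimators and goal-oriented indicators, at matched computational budgets, is precisely the systematic study the abstract describes.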

12:30-13:30 | Lunch at Wolfson Court

13:30-14:00 | Müller, A (Naval Postgraduate School)

Are adaptive simulations more accurate than uniform simulations? | Sem 1

Adaptive mesh refinement generally serves to increase computational efficiency without significantly compromising the accuracy of the numerical solution. However, it is an open question in which regions the spatial resolution can actually be coarsened without significantly affecting the accuracy of the result. Another open question is the following: does an adaptive computation simulate large-scale features of the flow more accurately than a uniform simulation when both use the same CPU time? These questions are investigated for a 2D dry warm air bubble with the help of a recently developed adaptive discontinuous Galerkin model. A method is introduced which allows one to compare the accuracy of different choices of refinement regions even when the exact solution is not known. Essentially, this is done by comparing features of the solution that are strongly sensitive to spatial resolution. The additional error introduced by adaptivity is smaller than 1% of the total numerical error when the average number of elements used for the adaptive simulation is about 50% smaller than the number used for the simulation with the uniform fine-resolution grid. Correspondingly, the adaptive simulation is almost twice as fast as the uniform simulation. Furthermore, the adaptive simulation is more accurate than a uniform simulation when both use the same CPU time.

14:00-15:00 | Breakout and discussion: the Newton Institute Programme, where should it lead, what will we achieve?

15:00-15:30 | Afternoon Tea

15:30-16:30 | Holm, D (Imperial College London)

Parameterizing interaction of disparate scales: selective decay by Casimir dissipation in fluids | Sem 1

The problem of parameterizing the interactions of disparate scales in fluid flows is addressed by considering a property of two-dimensional incompressible turbulence: selective decay, in which a Casimir of the ideal formulation (enstrophy in 2D flows) decays in time while the energy stays essentially constant. This paper introduces a mechanism that produces selective decay by enforcing Casimir dissipation in fluid dynamics. This mechanism turns out to be related in certain cases to the numerical method of anticipated vorticity of Sadourny and Basdevant (1981, 1985). Several examples are given, and a general theory of selective decay is developed using the Lie-Poisson structure of the ideal theory. A scale-selection operator allows the resulting modifications of the fluid motion equations to be interpreted, in several examples, as parameterizing the nonlinear dynamical interactions between disparate scales. The type of modified fluid equation systems derived here may be useful in turbulent geophysical flows where it is computationally prohibitive to rely on the slower, indirect effects of a realistic viscosity, such as in large-scale, coherent oceanic flows interacting with much smaller eddies.

16:30-17:00 | Michoski, C (University of Texas at Austin)

Adaptive multiscale discontinuous Galerkin methods for multiphase morphodynamics | Sem 1

We present a strongly coupled eigendecomposition problem for an extension of the Saint-Venant shallow water equations in two dimensions, coupled to a completely generalized Exner form of the sediment discharge equation. This formulation is used to implement an adaptive discontinuous Galerkin (DG) finite element method, using a Roe flux for the advective components and the unified form for the dissipative components. We discuss important mathematical and numerical nuances that arise from the emergence of nonconservative product formalisms in the presence of sharp gradients, and present some large-scale candidate application models with examples.
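For orientation, the classical Exner equation referenced above balances bed evolution against the divergence of the bedload flux (this is the standard textbook form; the talk's generalized version may differ):

```latex
(1 - \lambda_p)\,\frac{\partial z_b}{\partial t}
  + \nabla \cdot \mathbf{q}_b(\mathbf{u}, h) = 0,
```

where $z_b$ is the bed elevation, $\lambda_p$ the bed porosity, and $\mathbf{q}_b$ the bedload discharge, often closed as a power law in the flow velocity (e.g. the Grass model $\mathbf{q}_b = A_g \lvert \mathbf{u} \rvert^{m-1} \mathbf{u}$). Coupling this to the shallow water system is what produces the enlarged eigendecomposition problem mentioned in the abstract.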

19:30-22:00 | Conference Dinner at Corpus Christi College

### Friday 24 August

09:00-10:00 | Ainsworth, M (Brown University)

A Framework for the Development of Computable Error Bounds for Finite Element Approximations | Sem 1

We present an overview of our recent work on the development of fully computable upper bounds for the discretisation error, measured in the natural (energy) norm, for a variety of problems including linear elasticity, convection-diffusion-reaction and Stokes flow in three space dimensions. The bounds are genuine upper bounds, in the sense that the numerical value of the estimated error exceeds that of the true error regardless of the coarseness of the mesh or the nature of the data, and they are applicable to a variety of discretisation schemes including conforming, non-conforming and discontinuous Galerkin finite element schemes. All constants appearing in the bounds are fully specified. Numerical examples show the estimators are reliable and accurate even for complicated three-dimensional problems, and are suitable for driving adaptive finite element solution algorithms.

10:00-10:30 | Hill, J (Imperial College London)

Adapting to life: Ocean ecosystem modelling using an unstructured and adaptive mesh ocean model | Sem 1

Primary production in the world ocean is significantly controlled by mesoscale and sub-mesoscale processes. Thus, existing general circulation models applied at the basin and global scale face two opposing requirements: spatial resolution high enough to fully resolve the processes involved (down to order 1 km), and the need to realistically simulate the basin scale. No model can currently satisfy both of these constraints. Adaptive unstructured mesh techniques offer a fundamental advantage over standard fixed structured mesh models by automatically generating very high resolution only where and when it is required. Mesh adaptivity automatically resolves fine-scale physical or biological features as they develop, optimising computational cost by reducing resolution where it is not required. Here, we describe Fluidity-ICOM, a non-hydrostatic, finite-element, unstructured mesh ocean model into which we have embedded a six-component ecosystem model, which has been validated at a number of ocean locations. We show the different meshes that arise from using different metrics to create the adaptive mesh and from the underlying physical and biological processes that occur at each station. We then apply the model to a three-dimensional restratification problem and examine the effect of mesh resolution on simulated biological productivity on both fixed and adaptive meshes.

10:30-11:00 | Morning Coffee

11:00-11:30 | Li, Y (Chinese Academy of Sciences)

A new approach to implementing the sigma coordinate in a numerical model | Sem 1

This study shows a new way to implement a terrain-following σ-coordinate in a numerical model that does not lead to the well-known "pressure gradient force (PGF)" problem. First, the causes of the PGF problem are analyzed, and existing methods are categorized into two different types based on those causes. Then a new method that bypasses the PGF problem altogether is proposed. By comparing these three methods and analyzing the expression of the scalar gradient in a curvilinear coordinate system, this study finds that only when using the covariant scalar equations of the σ-coordinate does the PGF computational form have a single term in each momentum component equation, thereby avoiding the PGF problem completely. A convenient way of implementing the covariant scalar equations of the σ-coordinate in a numerical atmospheric model is illustrated, namely setting corresponding parameters in the scalar equations of the Cartesian coordinate. Finally, two idealized experiments show that the PGF calculated with the new method is more accurate than with the classic one. Specifically, the relative error of the PGF in the new method is reduced by orders of magnitude compared with the result obtained by the classic method, and the pattern of the PGF in the new method is more consistent with the analytical PGF pattern. This new method can be used for oceanic models as well, and needs to be tested in both atmospheric and oceanic models.
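The classic PGF problem arises because the horizontal pressure gradient splits into two terms under a terrain-following transform (a standard textbook identity, included for orientation):

```latex
\left.\frac{\partial p}{\partial x}\right|_{z}
  = \left.\frac{\partial p}{\partial x}\right|_{\sigma}
  - \frac{\partial p}{\partial z}
    \left.\frac{\partial z}{\partial x}\right|_{\sigma}.
```

Over steep terrain the two terms on the right are individually large and nearly cancel, so small truncation errors in each produce a large relative error in their sum. A formulation in which the PGF appears as a single term per momentum equation, as described above, removes this cancellation at the source.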

11:30-12:30 | Klöfkorn, R (Universität Stuttgart)

Discontinuous Galerkin Methods for Adaptive Atmospheric Flow | Sem 1

In this talk we present higher-order discontinuous Galerkin methods for convection dominated problems (using a limiter-based stabilization [3]) and diffusion dominated problems (see [4]). A comparison of these methods with COSMO, a well-established dynamical core for weather forecasting, on standard test cases for atmospheric flow [5] is presented. The talk also highlights software techniques as well as recent development of the software package Dune [1] and the discretization module Dune-Fem [2]. In particular, we comment on the implemented techniques that allow for local grid adaptivity even in parallel environments; in this case dynamic load-balancing is applied to maintain scalability of the simulation code.

References:

[1] P. Bastian, M. Blatt, A. Dedner, C. Engwer, R. Klöfkorn, R. Kornhuber, M. Ohlberger and O. Sander. A generic grid interface for parallel and adaptive scientific computing. II: Implementation and tests in Dune. Computing, 82(2-3):121-138, 2008.

[2] A. Dedner, R. Klöfkorn, M. Nolte and M. Ohlberger. A generic interface for parallel and adaptive scientific computing: abstraction principles and the Dune-Fem module. Computing, 90(3-4):165-196, 2010.

[3] A. Dedner and R. Klöfkorn. A generic stabilization approach for higher order discontinuous Galerkin methods for convection dominated problems. J. Sci. Comput., 47(3):365-388, 2011.

[4] S. Brdar, A. Dedner and R. Klöfkorn. Compact and stable discontinuous Galerkin methods for convection-diffusion problems. Preprint no. 2/2010, Mathematisches Institut, Universität Freiburg, 2010; accepted for publication in SIAM J. Sci. Comput.

[5] S. Brdar, M. Baldauf, A. Dedner and R. Klöfkorn. Comparison of dynamical cores for NWP models. Theor. Comput. Fluid Dyn., 2012.

12:30-13:30 | Lunch at Wolfson Court

13:30-14:30 | Budd, C (University of Bath)

Monge-Ampère based moving mesh methods with applications to numerical weather prediction | Sem 1

Moving mesh methods can be very effective for problems with many scales, such as those which arise in numerical weather prediction and data assimilation. However, traditional moving mesh methods can have problems with implementation and mesh tangling, which have made them less effective than other adaptive methods for problems in meteorology. In this talk I will describe a moving mesh method, based on ideas from the theory of optimal transport, which derives a mesh by solving a Monge-Ampère equation. This can then be coupled to a CFD solver to provide an effective method for solving multiscale incompressible flows. I will describe the method and apply it to several meteorological problems. Joint work with Mike Cullen, Chiara Piccolo, Emily Walsh and Phil Browne.
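The optimal-transport construction can be summarised as follows (a standard formulation, given for orientation; the details of the talk's method may differ). Writing the mesh map as the gradient of a convex potential, $\mathbf{x} = \nabla \varphi(\boldsymbol{\xi})$, equidistribution of a monitor function $m(\mathbf{x})$ leads to the Monge-Ampère equation

```latex
m\big(\nabla \varphi(\boldsymbol{\xi})\big)\,
  \det\!\big(D^2 \varphi(\boldsymbol{\xi})\big) = \theta,
```

where $\theta$ is a normalisation constant. Convexity of $\varphi$ guarantees the mesh map is a bijection, which is precisely what rules out the mesh tangling that afflicts traditional moving mesh methods.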

14:30-15:00 | Bader, M (Technische Universität München)

Parallelization and Software Concepts for Tsunami Simulation on Dynamically Adaptive Triangular Grids | Sem 1

We present a memory- and cache-efficient approach for simulations on recursively refined, dynamically adaptive triangular grids. Grid cells are stored and processed in an order defined by the Sierpinski curve; the resulting locality properties are exploited for optimised serial implementation and parallelisation. The approach is particularly designed for finite volume and discontinuous Galerkin solvers, with tsunami simulation as the main target application. In the talk, we will discuss approaches for parallelisation in shared and distributed memory. We will present a classical partitioning-based strategy, as well as a novel shared-memory approach based on dynamic scheduling of many small sub-partitions. Here, the intention is to allow for strongly varying computational load per element (as required for inundation modelling, or for local time-stepping methods). In addition, we would like to discuss some ideas on how to provide bathymetry data and (time-dependent) displacements for a simulation with dynamically adaptive refinement. We will present some first results of adaptive simulations using the augmented Riemann solvers provided with GeoClaw.
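The Sierpinski ordering mentioned above falls out of the bisection recursion itself: visiting the two children of each triangle in the right order produces a traversal in which consecutive cells are geometric neighbours. A minimal sketch (purely illustrative, not the talk's actual code):

```python
import numpy as np

def bisect_traverse(v0, v1, v2, depth, out):
    """Recursively bisect a right isosceles triangle (hypotenuse v0-v1,
    apex v2) and append the leaf-cell centroids in traversal order.

    Visiting the child containing the entry vertex v0 first, then the
    child containing the exit vertex v1, yields the Sierpinski-curve
    order of the leaves, which gives the memory/cache locality
    exploited in the talk."""
    if depth == 0:
        out.append((np.asarray(v0) + v1 + v2) / 3.0)
        return
    mid = (np.asarray(v0) + v1) / 2.0          # midpoint of hypotenuse
    bisect_traverse(v0, v2, mid, depth - 1, out)  # enter at v0, exit v2
    bisect_traverse(v2, v1, mid, depth - 1, out)  # enter at v2, exit v1

cells = []
bisect_traverse((0.0, 0.0), (1.0, 0.0), (0.5, 0.5), depth=6, out=cells)
# Consecutive cells along the traversal share a vertex, so adjacent-
# in-memory means close-in-space: the locality property of the curve.
d = [float(np.linalg.norm(a - b)) for a, b in zip(cells, cells[1:])]
print(len(cells), "cells; max jump between neighbours:", max(d))
```

Storing cells in this order means a sweep over the array is also a spatially coherent sweep over the grid, which is what makes stack- and stream-based implementations cache-efficient.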

15:00-15:30 | Afternoon Tea

15:30-16:00 | Côté, J (UQAM - Université du Québec à Montréal)

Unified Multiscale Operational Weather Forecasting at the Canadian Meteorological Centre | Sem 1

The genesis of the unified Global Environmental Multiscale (GEM) model and forecasting system, which has been gradually deployed since 1997, will be presented. It was originally designed for both forecasting and data assimilation, at uniform resolution on the global scale and variable resolution on the continental scale. It could also be run for mesoscale forecasting over smaller areas by increasing the stretching of the grid. The formulation of the model allowed it to be run in either hydrostatic or non-hydrostatic mode. Tangent linear and adjoint models were developed for variational data assimilation, and data assimilation was performed first using 3D-Var and later 4D-Var. More recently, a global ensemble system was developed based on the GEM model for both ensemble forecasting and data assimilation. The modelling system has been generalised in different directions to become a complete environmental prediction system: emergency response, volcanic ash, air quality, stratospheric ozone, wave modelling, coupling to rivers and oceans, etc. The development of a nested version of the model is having a profound impact on the forecasting system. After thorough testing, it was used successfully for operational forecasting at 1 km during the Vancouver Olympics in 2010, and it will become operational at 2.5 km over several windows in Canada. A large uniform-resolution nested model has been shown to be equivalent to the variable-resolution version for continental forecasts and was implemented operationally. The nested version has also allowed the development of a regional ensemble system. A replacement of the uniform-resolution global grid by the composite Yin-Yang grid, based on two uniform-resolution limited-area grids, is under study. Preliminary testing shows this version to be equivalent in accuracy, but free of the "pole problem" affecting grid-point latitude-longitude models and much more suitable for future supercomputer architectures.

16:00-16:30 | Cotter, C (Imperial College London)

Finite element exterior calculus framework for geophysical fluid dynamics | Sem 1

Finite element exterior calculus provides an extension of the mimetic differencing approach that underlies the C-grid schemes used in the ICON and MPAS projects. This talk will lay out an extension of this approach, in particular of the approach of Ringler, Thuburn, Skamarock and Klemp (2010), to a finite element framework. The required properties of (a) stationary geostrophic linear modes on the f-plane, (b) local conservation of mass, and (c) conservative, consistent advection of potential vorticity are retaining in this framework, whilst allowing higher-order accuracy and increased flexibility which can be used to alter the balance of vorticity and mass degrees of freedom to minimise the potential for spurious modes. |