
Workshop Programme for period 23 - 27 August 2010

Stochastic Methods in Climate Modelling

23 - 27 August 2010

Timetable

Monday 23 August
Session: Grand challenges and introduction to the week
09:00-09:50 Registration
09:50-10:00 Welcome by Dr Ben Mestel (INI Deputy Director)
10:00-11:00 Palmer, T (ECMWF)
  A very grand challenge for the science of climate prediction Sem 1
 

A rather prevalent picture of the development of climate models throughout the 20th Century is that the idealised, simplified, and hence mathematically tractable models of climate were the focus of mathematicians, leaving to engineers the "brute force" approach of developing ab initio Earth System Models. I think we should leave this paradigm in the 20th Century, where it belongs: for one thing, the threat of climate change is too important and the problems of predicting climate reliably too great. For the 21st Century, I propose that mathematicians need to engage in developing innovative methods to represent the unresolved and poorly resolved scales in ab initio models, based on nonlinear stochastic-dynamic methods. The reasons are (at least) threefold. Firstly, climate model biases are still substantial, and may well be systemically related to the use of deterministic bulk-formula closure - this is an area where a much better basic understanding is needed. Secondly, deterministically formulated climate models are incapable of predicting the uncertainty in their predictions; and yet this is a crucially important prognostic variable for societal applications. Stochastic-dynamic closures can in principle provide this. Finally, the need to maintain worldwide a pool of quasi-independent deterministic models purely in order to have an ad hoc multi-model estimate of uncertainty does not make efficient use of the limited human and computer resources available worldwide for climate model development. The development of skilful stochastic-dynamic closures would obviate the need for such inefficient use of human resources. As such, a very grand challenge for the science of climate prediction is presented in the form of a plea for the engagement of mathematicians in the development of a prototype Probabilistic Earth-System Model. It is hoped that this Newton Institute Programme will be seen as pivotal for such development.

 
11:00-11:30 Morning Coffee
11:30-12:30 Jones, C (University of Warwick)
  Is data assimilation relevant to climate research? Sem 1
 

Data assimilation (DA) has not been used in climate studies to anything like the extent it has in weather prediction. I will discuss whether this is likely to change and argue that DA has a lot to offer climate research, particularly when cast in a Bayesian framework. Motivating examples from paleoclimate and ocean studies will be given that serve to outline the major challenges arising when DA is used to tackle climate problems.

 
12:30-13:30 Lunch at Wolfson Court
14:00-15:00 Lenton, T (University of East Anglia)
  Identification and early warning of climate tipping points Sem 1
 

Striking developments in the climate system in recent years have reinforced the view that anthropogenic radiative forcing is unlikely to cause a smooth transition into the future. Drought in the Amazon in 2005, record Arctic sea-ice decline in 2007, accelerating loss of water from the Greenland and West Antarctic ice sheets, and an extraordinary Asian summer monsoon in 2010, have all made the headlines. These large-scale components of the Earth system are among those that we have identified as potential ‘tipping elements’ – climate sub-systems that could exhibit a ‘tipping point’ where a small change in forcing causes a qualitative change in their future state. The resulting transition may be either abrupt or irreversible, or in the worst cases, both. In IPCC terms such changes are referred to as “large-scale discontinuities”. Should they occur, they would surely qualify as dangerous climate changes. Recent assessments suggest that the traditional view that tipping points are very low probability events should be revised upwards - especially if we continue business-as-usual. Given this, is there any prospect of providing societies with a useful early warning signal of an approaching climate tipping point? The talk will have two main aims. Firstly, we want to review (and slightly revise) the list of potential tipping elements, providing some updates, especially where there is new insight into the mechanisms behind them, or new information about the proximity of tipping points. Secondly, we want to present our ongoing work to try to develop robust methods of identifying and anticipating tipping points (in particular, bifurcations) in the climate system. Our latest application of these methods to sea surface temperature data suggests that a new climate state may be in the process of appearing, particularly in the Arctic and northernmost Atlantic region.
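
The anticipation methods referred to here typically rest on critical slowing down near a bifurcation. As a purely illustrative sketch (synthetic data and parameters of my own choosing, not the speaker's actual method), the following Python fragment shows the standard indicator: lag-1 autocorrelation rising in sliding windows as a tipping point is approached.

```python
# Toy demonstration of a generic early-warning indicator: as the decay
# rate lam(t) is ramped towards zero (the bifurcation), recovery from
# perturbations slows and the lag-1 autocorrelation rises towards 1.
import numpy as np

rng = np.random.default_rng(0)

n, dt = 4000, 0.1
lam = np.linspace(1.0, 0.05, n)          # decay rate ramped to the bifurcation
x = np.zeros(n)
for k in range(n - 1):
    x[k + 1] = x[k] - lam[k] * x[k] * dt + 0.3 * np.sqrt(dt) * rng.standard_normal()

def lag1_autocorr(w):
    w = w - w.mean()
    return float(np.dot(w[:-1], w[1:]) / np.dot(w, w))

win = 500
ac1 = [lag1_autocorr(x[k:k + win]) for k in range(0, n - win + 1, win)]
print("lag-1 autocorrelation per window:", np.round(ac1, 2))  # rises toward 1
```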

 
15:00-15:30 Afternoon Tea
15:30-16:30 Dewar, R (The Australian National University)
  Maximum entropy production and climate modelling: an overview of theory and applications Sem 1
 

Since the work of Onsager in the 1930s, Maximum Entropy Production (MaxEP) has been proposed in various guises as a thermodynamic selection principle governing the macroscopic behaviour of non-equilibrium systems. While some encouragingly realistic predictions have been obtained from MaxEP in a diverse range of non-equilibrium systems across physics, chemistry and biology – including climate systems – two outstanding questions have hindered its wider adoption as a mainstream predictive tool: What is the theoretical basis for MaxEP? And what is the appropriate entropy production to be maximised in any given problem? In this introductory talk I will summarise recent progress towards answering these questions, and outline some implications for the practical role of MaxEP in climate modelling.
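
For orientation, the canonical MaxEP exercise can be stated in a few lines. The sketch below is a Paltridge-style two-box caricature with illustrative parameter values of my own (not taken from the talk): the heat flux between boxes is chosen to maximise the material entropy production, with each box temperature fixed by its own energy balance.

```python
# Two-box MaxEP sketch (illustrative parameters): choose the
# equator-to-pole heat flux F that maximizes the entropy production
#   EP(F) = F * (1/T_cold - 1/T_warm),
# with box temperatures set by energy balance, OLR = A + B*T.
import numpy as np

A, B = -368.0, 2.09            # linearized OLR (W m^-2), T in K
S_warm, S_cold = 300.0, 160.0  # absorbed solar radiation per box (W m^-2)

def ep(F):
    T_warm = (S_warm - F - A) / B   # warm-box energy balance
    T_cold = (S_cold + F - A) / B   # cold-box energy balance
    return F * (1.0 / T_cold - 1.0 / T_warm)

F = np.linspace(0.0, 70.0, 701)
best = F[np.argmax([ep(f) for f in F])]
print(f"entropy production is maximized at F ~ {best:.0f} W m^-2")
```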

 
16:45-17:45 Shutts, G (Met Office)
  Current use of stochastic methods in operational NWP/climate forecasting: are they physically justifiable? Sem 1
 

The physical basis for current methods of stochastic parametrization in NWP/climate models is reviewed and their plausibility assessed with respect to unresolved or near-gridscale meteorological phenomena. This issue is closely related to that of the predictability of convective scale and mesoscale weather systems. The coarse-graining strategy is described and applied to high-resolution NWP model forecast output and cloud-resolving model simulations of deep, tropical convection. The results are used to provide some constraints on the stochastic backscatter and the perturbed physical tendency approaches.

 
17:45-18:15 Welcome Wine Reception
18:15-19:00 Dinner at Churchill College
Tuesday 24 August
Session: Earth's climate as a dynamical system
10:00-11:00 Crucifix, M (Université Catholique de Louvain)
  Stochastic methods for understanding palaeoclimates Sem 1
 

We review the fundamental basis of palaeoclimate theory: astronomical control on insolation, climate models as (stochastic) dynamical systems, and statistical frameworks for model selection and model calibration, accounting for the specificities of the palaeoclimate problem: sparse data, dating uncertainties and phenomenological character. In the spirit of the workshop, we emphasise the stochastic aspects of the theory. Stochastic methods intervene in model design, in order to parameterise climatic events at shorter time scales than the dynamics deterministically represented in the model. As stochastic parameterisations are introduced, the notions of synchronisation and climatic attractor have to be revisited, but modern mathematics provides the tools to this end (pullback and random attractors). In a specific example, we show how the synchronisation patterns on astronomical forcing evolve as the complexity of the astronomical forcing is gradually taken into account, and then when stochastic parameterisations are introduced. Stochastic methods naturally occur in statistical problems of model calibration and selection, via Monte Carlo sampling methods. We give an overview of what has been attempted so far, including particle filters for state and parameter estimation, although we are still in uncharted territory. Finally, we conclude with more philosophical attempts at understanding the meaning of stochastic parameterisations ('sub-grid parameterisations' or 'model error').

 
11:00-11:30 Morning Coffee
11:30-12:30 Franzke, C (University of Cambridge)
  Systematic Strategies for Stochastic Climate Modeling Sem 1
 

The climate system has a wide range of temporal and spatial scales for important physical processes. Examples include convective activity on an hourly time scale, organized synoptic-scale weather systems on a daily time scale, extra-tropical low-frequency variability on time scales of 10 days to months, and decadal time scales of the coupled atmosphere-ocean system. An understanding of the processes acting on different spatial and temporal scales is important since all these processes interact with each other through the nonlinearities in the governing equations. Most of the current problems in understanding and predicting the climate system stem from its multi-scale nature: the neglect and/or misrepresentation of some of these processes leads to systematic biases in the resolved processes and to uncertainties in the climate response. A better understanding of the multi-scale nature of the climate system will be crucial for making more accurate and reliable weather and climate predictions. In my presentation I will discuss systematic strategies for deriving stochastic models for climate prediction. The stochastic mode reduction strategy accounts systematically for the effect of the unresolved degrees of freedom and predicts the functional form of the effective reduced equations. These procedures extend beyond simple Langevin equations with additive noise by predicting nonlinear effective equations with both additive and multiplicative (state-dependent) noises. The stochastic mode reduction strategy predicts rigorously closed-form stochastic models for the slow variables in the limit of infinite separation of time scales.
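
As a concrete illustration of the functional form this strategy predicts, the following scalar caricature (with made-up coefficients, not a reduction of any particular model) combines cubic nonlinear drift with additive and multiplicative noise; the state-dependent noise is what produces non-Gaussian statistics.

```python
# Euler-Maruyama simulation of a reduced climate variable of the type
# predicted by stochastic mode reduction: cubic drift plus additive and
# multiplicative (state-dependent) noise. Coefficients are illustrative.
import numpy as np

rng = np.random.default_rng(2)
a, b, c = 0.2, -0.1, 0.5     # drift coefficients; cubic damping c > 0
sig_a, sig_m = 0.4, 0.3      # additive / multiplicative noise amplitudes

dt, n = 0.01, 200_000
x = np.empty(n)
x[0] = 0.0
for k in range(n - 1):
    dW = np.sqrt(dt) * rng.standard_normal()
    drift = a * x[k] + b * x[k] ** 2 - c * x[k] ** 3
    x[k + 1] = x[k] + drift * dt + (sig_a + sig_m * x[k]) * dW

skew = ((x - x.mean()) ** 3).mean() / x.std() ** 3
print(f"mean {x.mean():.2f}, variance {x.var():.2f}, skewness {skew:.2f}")
```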

 
12:30-13:30 Lunch at Wolfson Court
14:00-15:00 Branstator, G (National Center for Atmospheric Research)
  Properties of the atmospheric response to tropical heating estimated from the fluctuation dissipation theorem Sem 1
 

Recent studies have demonstrated the applicability of the Fluctuation Dissipation Theorem (FDT) to atmospheric response problems in which the external stimulus is a function of space but is constant in time. These investigations have made clear the utility of the resulting response operators for addressing questions concerning optimal response, climate control, attribution and physical mechanisms. In this presentation we explore the usefulness of the FDT methodology for response problems in which the imposed forcing is a function of time. In our study we concentrate on the effects of time-varying tropical heating. First we validate operators designed to match the solutions of AGCMs. Next we use the operators to systematically explore how the tropical and midlatitude response depends on attributes of the tropical heating, including its position, structure and movement. Not only are operators for the response of mean state variables considered, but also operators that give the response of functionals of the state, including eddy variance and fluxes associated with the storm tracks.
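
A schematic of the quasi-Gaussian FDT construction such operators are based on is given below; the surrogate AR(1) "data" and the two-dimensional matrices are invented for illustration and stand in for AGCM output.

```python
# Quasi-Gaussian FDT sketch: estimate the response operator
#   L = sum_{tau >= 0} C(tau) C(0)^{-1}
# from lag-covariances C(tau) of the unforced variability; the steady
# response to a constant forcing f is then approximately L @ f.
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[0.9, 0.05],
              [0.0, 0.8]])                 # "true" dynamics (unknown to FDT)
data = np.zeros((50_000, 2))
for k in range(len(data) - 1):
    data[k + 1] = A @ data[k] + rng.standard_normal(2)
data -= data.mean(axis=0)

def lag_cov(x, lag):
    return x[lag:].T @ x[:len(x) - lag] / (len(x) - lag)

C0_inv = np.linalg.inv(lag_cov(data, 0))
L = sum(lag_cov(data, tau) @ C0_inv for tau in range(200))  # truncated sum

f = np.array([1.0, 0.0])                   # constant "heating" pattern
print("FDT response :", L @ f)
print("true response:", np.linalg.inv(np.eye(2) - A) @ f)
```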

 
15:00-15:30 Afternoon Tea
15:30-16:30 Kleeman, R (Courant Institute)
  The spectra of a general class of stochastic climate models Sem 1
 

The simplest class of stochastic models relevant to geophysical applications consists of a linearization of the dynamical system and the addition of constant multivariate stochastic forcing. Such stochastic systems are known as finite-dimensional Ornstein-Uhlenbeck systems and have wide application. In this talk we describe a general decomposition of the equilibrium spectrum of such processes. This is of interest in applications since spectra of long time series are commonly and robustly estimated from observations. We apply this formalism to the case of ENSO, where it is often argued that there is a dominant normal mode. Here we argue that the decadal part of the ENSO spectrum can be explained simply by the stimulation of the cross spectrum of the dominant normal mode. The cross spectrum depends on the ENSO cycle phase, meaning that this mechanism implies that the different ENSO phases have different spectral strengths at decadal frequencies.
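
For orientation, the object being decomposed can be written down directly. The sketch below evaluates the equilibrium spectrum of a finite-dimensional Ornstein-Uhlenbeck process; the two-dimensional damped-oscillator matrices are illustrative, not the ENSO operator discussed in the talk.

```python
# Stationary spectral matrix of the OU process dX = A X dt + B dW:
#   S(omega) = (i omega I - A)^{-1} Q (-i omega I - A^T)^{-1} / (2 pi),
# with Q = B B^T.
import numpy as np

A = np.array([[-0.1,  1.0],
              [-1.0, -0.1]])     # stable drift: damped oscillation
B = np.diag([0.5, 0.5])          # constant multivariate stochastic forcing
Q = B @ B.T

def spectrum(omega):
    I = np.eye(A.shape[0])
    G = np.linalg.inv(1j * omega * I - A)    # resolvent
    return (G @ Q @ G.conj().T).real / (2.0 * np.pi)

omegas = np.linspace(0.0, 3.0, 301)
power = [spectrum(w)[0, 0] for w in omegas]  # spectrum of component 0
print(f"spectral peak near omega = {omegas[int(np.argmax(power))]:.2f}")
```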

 
16:45-17:45 Discussion session - What are the key dynamical systems questions to address in the programme?
18:15-19:00 Dinner at Churchill College
Wednesday 25 August
Session: Tipping points
10:00-11:00 Thompson, M; Sieber, J (Cambridge and Portsmouth Universities)
  Climate tipping as a noisy bifurcation: a predictive technique Sem 1
 

In the first half of this contribution (speaker JMTT) we review the bifurcations of dissipative dynamical systems. The codimension-one bifurcations, namely those which can typically be encountered under slowly evolving controls, can be classified as safe, explosive or dangerous. Focusing on the dangerous events, which could underlie climate tipping, we examine the precursors (in particular the slowing of transients) and the outcomes, which can be indeterminate due to fractal basin boundaries. It is often known, from modelling studies, that a certain mode of climate tipping is governed by an underlying bifurcation. For the case of a so-called fold, a commonly encountered bifurcation (of the oceanic thermohaline circulation, for example), we estimate (speaker JS) how likely it is that the system escapes from its currently stable state due to noise before the tipping point is reached. Our analysis is based on simple normal forms, which makes it potentially useful whenever this type of tipping is identified (or suspected) in either climate models or measurements. Drawing on this, we suggest a scheme of analysis that determines the best stochastic fit to the existing data. This provides the evolution rate of the effective control parameter, the (parabolic) variation of the stability coefficient, the path itself and its tipping point. By assessing the actual effective level of noise in the available time series, we are then able to make probability estimates of the time of tipping. In this vein, we examine, first, the output of a computer simulation for the end of greenhouse Earth about 34 million years ago, when the climate tipped from a tropical state into an icehouse state with ice caps. Second, we use the algorithms to give probabilistic tipping estimates for the end of the most recent glaciation of the Earth using actual ice-core data.
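
The flavour of such probability estimates can be conveyed with a minimal normal-form sketch; all parameters below are invented for illustration, whereas the method described in the talk additionally fits the drift, noise level and control path to data.

```python
# Noise-induced escape ahead of a fold: the normal form
#   dx = (a - x^2) dt + sigma dW,
# with the control a(t) ramped slowly downward through the fold at a = 0.
# Monte Carlo over noise realizations gives a distribution of tipping
# times, typically earlier than the deterministic fold.
import numpy as np

rng = np.random.default_rng(1)
dt, sigma, eps = 0.01, 0.15, 0.01     # step, noise level, ramping rate

def escape_time():
    a, x, t = 1.0, 1.0, 0.0           # start on the stable branch x = +sqrt(a)
    while a > -0.5:
        x += (a - x * x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        a -= eps * dt
        t += dt
        if x < -2.0:                  # fallen past the unstable branch
            return t
    return np.nan

times = np.array([escape_time() for _ in range(200)])
print(f"mean tipping time {np.nanmean(times):.0f} vs deterministic fold at "
      f"t = {1.0 / eps:.0f}")
```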

 
11:00-11:30 Morning Coffee
11:30-12:30 Wieczorek, S (University of Exeter)
  Rate-dependent tipping points: the example of the compost-bomb instability Sem 1
 

This paper discusses rate-dependent tipping points related to a novel excitability type, where a (globally) stable equilibrium exists for all fixed settings of a system's parameter but catastrophic excitable bursts appear when the parameter is increased slowly, or ramped, from one setting to another. Such excitable systems form a singularly perturbed problem with at least two slow variables, and we focus on the case with a locally folded critical manifold. Our analysis, based on desingularisation, relates the rate-dependent tipping point to a canard trajectory through a folded saddle and gives the general equation for the critical rate of ramping. The general analysis is motivated by the need to understand the response of peatlands to global warming. It is estimated that peatland soils contain 400 to 1000 billion tonnes of carbon, which is of the same order of magnitude as the carbon content of the atmosphere. Recent work suggests that biochemical heat release could destabilize peatland above some critical rate of global warming, leading to a catastrophic release of soil carbon into the atmosphere termed the "compost bomb instability". This instability is identified as a rate-dependent tipping point in the response of the climate system to anthropogenic forcing (atmospheric temperature ramping).
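
A standard prototype of rate-induced tipping (a textbook toy, not the peatland model itself) makes the notion of a critical ramping rate concrete:

```python
# Prototype rate-induced tipping: dx/dt = (x + lam)^2 - 1 with the
# parameter ramped as lam(t) = r*t. In shifted coordinates y = x + lam
# this reads dy/dt = y^2 - 1 + r, so the slowly moving stable state can
# be tracked only for ramp rates r < 1: tipping is set by the *rate* of
# parameter change, not by any bifurcation of the frozen system.

def tips(r, dt=1e-3, t_max=200.0):
    """Integrate the shifted system; True if the solution escapes."""
    y = -1.0                          # on the frozen-system stable state
    for _ in range(int(t_max / dt)):
        y += (y * y - 1.0 + r) * dt
        if y > 10.0:
            return True
    return False

for r in (0.5, 0.9, 0.99, 1.01, 1.5):
    print(f"ramp rate r = {r:<4}: tips -> {tips(r)}")
```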

 
12:30-13:30 Lunch at Wolfson Court
Session: Use of ensembles
14:00-15:00 Semenov, M (Rothamsted Research)
  Delivering local-scale climate scenarios for impact assessments Sem 1
 

Process-based models used in the assessment of climate change impacts require daily weather as one of their main inputs. The direct use of climate predictions from global or regional climate models can be problematic because of the coarse spatial resolution and the large uncertainty of their output at a daily scale, particularly for precipitation. Output from a climate model therefore requires the application of downscaling techniques, such as a weather generator (WG). A WG is a model which, after calibration of site parameters with observed weather, is capable of simulating synthetic daily weather that is statistically similar to the observations. By altering the site parameters using changes in climate predicted by climate models, it is possible to generate daily weather for the future. A dataset, ELPIS, of local-scale daily climate scenarios for Europe has been developed. This dataset is based on 25 km grids of interpolated daily precipitation, minimum and maximum temperatures and radiation from the European Crop Growth Monitoring System (CGMS) meteorological dataset, and on climate predictions from the multi-model ensemble of 15 global climate models used in the IPCC 4th Assessment Report. The site parameters for the distributions of climatic variables have been estimated by the LARS-WG weather generator for nearly 12 000 grids in Europe for the period 1982–2008. The ability of LARS-WG to reproduce observed weather was assessed using statistical tests. This dataset was designed for use in conjunction with process-based impact models (e.g. crop simulation models) for the assessment of climate change impacts in Europe. A climate scenario generated by LARS-WG for a grid represents daily weather at a typical site from this grid that is used for agricultural production. This makes it different from the recently developed 25 km gridded dataset for Europe (E-OBS), which gives the best estimate of grid-box averages to enable direct comparison with regional climate models.
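
The weather-generator idea can be illustrated with a deliberately crude toy: a two-state Markov chain for rain occurrence, gamma-distributed amounts and Gaussian temperatures. LARS-WG itself uses richer, semi-empirical distributions; all parameter values below are invented.

```python
# Toy daily weather generator. Site parameters would be calibrated to
# observed weather; scenario runs perturb them with climate-model
# changes (here: a temperature offset and a precipitation factor).
import numpy as np

rng = np.random.default_rng(4)
p_wet_after_dry, p_wet_after_wet = 0.25, 0.60   # occurrence (Markov chain)
gamma_shape, gamma_scale = 0.8, 6.0             # wet-day amounts (mm)
t_mean, t_std = 14.0, 4.0                       # daily mean temperature (degC)

def simulate_year(days=365, d_temp=0.0, f_precip=1.0):
    wet, out = False, []
    for _ in range(days):
        wet = rng.random() < (p_wet_after_wet if wet else p_wet_after_dry)
        rain = f_precip * rng.gamma(gamma_shape, gamma_scale) if wet else 0.0
        temp = rng.normal(t_mean + d_temp, t_std)
        out.append((rain, temp))
    return np.array(out)

base = simulate_year()
future = simulate_year(d_temp=2.0, f_precip=0.9)   # e.g. GCM-derived changes
print(f"annual precipitation: {base[:, 0].sum():.0f} mm -> "
      f"{future[:, 0].sum():.0f} mm")
```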

 
15:00-15:30 Afternoon Tea
15:30-16:30 Kunsch, H (ETH Zürich)
  Biases and uncertainty in multi-model climate projections Sem 1
 

The ensemble approach was originally developed for probabilistic medium-range weather forecasting and is now broadly used in numerical weather prediction, seasonal forecasting and climate research on a wide range of time scales. Applications geared towards climate projections are usually based on a heterogeneous ensemble with typically a mere handful of ensemble members, stemming from different models in an only partly coordinated framework. An important limitation of ensemble approaches in climate research is the inability to rigorously quantify climate model biases. While biases of climate models are monitored for the control period, the lack of long-term comprehensive observations (on the centennial time scales considered) means that it is difficult to decide how the model biases will change with the climate state. In contrast to other studies, we look not only at 20 or 30 year averages, but also at the interannual variability. This allows us to consider additive and multiplicative biases. In the talk, I will discuss two plausible assumptions about the extrapolation of additive biases, referred to as the "constant bias" and "constant relation" assumptions. The former is used implicitly in most studies of climate change. The latter asserts that over-/underestimation of the interannual variability in the control period leads also to over-/underestimation of climate change; this assumption is closely related to the statistical post-processing of seasonal climate predictions. In addition, we explicitly allow the additive and multiplicative model biases to change between control and scenario periods, resolving the resulting lack of identifiability by the use of informative priors. An analysis of GCM/RCM simulations from the ENSEMBLES project shows that bias assumptions critically affect the results for several regions and seasons.
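
In simplest terms (a numerical caricature with invented numbers, ignoring the full Bayesian treatment and the multiplicative-bias component), the two assumptions extrapolate a model's climate-change signal differently:

```python
# "Constant bias": the additive bias of the control period is carried
# forward unchanged. "Constant relation": the climate-change signal is
# rescaled by the ratio of observed to modelled interannual variability,
# so a model that overestimates variability also overestimates change.
obs_mean, obs_sd = 10.0, 1.0             # observations, control period
mod_ctrl_mean, mod_ctrl_sd = 12.0, 1.5   # model, control period
mod_scen_mean = 15.0                     # model, scenario period

delta = mod_scen_mean - mod_ctrl_mean    # modelled change: +3.0

proj_constant_bias = obs_mean + delta                             # 13.0
proj_constant_relation = obs_mean + delta * obs_sd / mod_ctrl_sd  # 12.0
print(proj_constant_bias, proj_constant_relation)
```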

 
16:45-17:45 Cox, P (University of Exeter)
  Model resolution versus ensemble size: optimizing the trade-off for finite computing resources Sem 1
19:30-22:00 Conference Dinner at Jesus College
Thursday 26 August
Session: Maximum Entropy Production (MEP)
10:00-11:00 Kleidon, A (Max-Planck-Institut)
  Life, hierarchy, and the thermodynamic machinery of planet Earth Sem 1
 

Throughout Earth’s history, life has increased greatly in abundance, complexity, and diversity. At the same time, it has substantially altered the Earth’s environment, evolving some of its variables to states further and further away from thermodynamic equilibrium. For instance, concentrations in atmospheric oxygen have increased throughout Earth's history, resulting in an increased chemical disequilibrium in the atmosphere as well as an increased redox gradient between the atmosphere and the Earth's reducing crust. These trends seem to contradict the second law of thermodynamics, which states for isolated systems that gradients and free energy are dissipated over time, resulting in a state of thermodynamic equilibrium. This seeming contradiction is resolved by considering planet Earth as a coupled, hierarchical and evolving non-equilibrium thermodynamic system that has been substantially altered by the input of free energy generated by photosynthetic life. Here, I present this hierarchical thermodynamic theory of the Earth system. I first present simple considerations to show that thermodynamic variables are driven away from a state of thermodynamic equilibrium by the transfer of power from some other process and that the resulting state of disequilibrium reflects the past net work done on the variable. This is applied to the processes of planet Earth to characterize the generation and transfer of free energy and its dissipation, from radiative gradients to temperature and chemical potential gradients that result in chemical, kinetic, and potential free energy and associated dynamics of the climate system and geochemical cycles. The maximization of power transfer among the processes within this hierarchy is closely related to the proposed principle of Maximum Entropy Production (MEP). The role of life is then discussed as a photochemical process that generates substantial amounts of additional free energy which essentially skips the limitations and inefficiencies associated with the transfer of power within the thermodynamic hierarchy of the planet. In summary, this perspective allows us to view life as being the means to transform many aspects of planet Earth to states even further away from thermodynamic equilibrium than is possible by purely abiotic means. In this perspective, pockets of low-entropy life emerge from the overall trend of the Earth system to increase the entropy of the universe at the fastest possible rate. The implications of the theory presented here are discussed with regard to fundamental deficiencies in Earth system modeling, applications of the theory to reconstructions of Earth system history, the evaluation of human impacts, and the limits of renewable sources of free energy for future human energy demands.

 
11:00-11:30 Morning Coffee
11:30-12:30 Jupp, T (University of Exeter)
  MEP and planetary climates: insights from a two-box climate model containing atmospheric dynamics Sem 1
12:30-13:30 Lunch at Wolfson Court
14:00-15:00 Gregory, J (University of Reading)
  Climate entropy production based on AOGCM diagnostics Sem 1
 

Most investigations of the MEP hypothesis have used climate models which do not explicitly simulate the physics and dynamics of the climate system (such as Paltridge's model), or in which they are radically simplified (such as a dry GCM). We have instead concentrated on entropy analysis of the HadCM3 atmosphere-ocean general circulation model (AOGCM), the kind of model used for prediction of 21st-century global climate change. In the AOGCM, we diagnose the entropy sources and sinks directly from the diabatic heating terms. The rate of material entropy production of the climate system (i.e. not including thermal equilibration of radiation) is about 50 mW m⁻² K⁻¹. The largest part of the material EP (about 38 mW m⁻² K⁻¹) is due to sensible and latent heat transport. When we vary parameters in the physical formulation of the AOGCM, MEP might suggest that the most realistic version is the one with the largest EP. However, in the AOGCM there is no maximum in EP, for two reasons. First, the strongest influence on EP is the throughput of energy from the net shortwave absorption, which is very sensitive to model parametrisation, rather than the anticorrelation of heat flux and temperature gradient seen in simple models when net shortwave absorption is fixed. This dependence comes particularly from the dominance of EP by the hydrological cycle, which intensifies monotonically with the global average temperature. Second, the EP predominantly comes from vertical heat transport, and achieving a maximum with fixed shortwave heating implies an unrealistic vertical temperature gradient and/or unphysical longwave emissivity. There is, however, a maximum in KE dissipation in the atmosphere, similar to Lorenz's (1960) conjecture, associated with a smaller part of the material EP (about 13 mW m⁻² K⁻¹).
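
As a back-of-envelope consistency check on the magnitudes quoted (round numbers of my own, not AOGCM diagnostics): a vertical sensible-plus-latent heat flux of about 100 W m⁻², exported from the surface and deposited where the atmosphere radiates to space, produces material entropy at roughly the stated rate.

```python
# A heat flux F carried from a warm reservoir at T_surf to a cold one
# at T_atm produces entropy at the rate F * (1/T_atm - 1/T_surf).
T_surf, T_atm = 288.0, 255.0   # surface / effective emission temperature (K)
F = 100.0                      # vertical sensible + latent heat flux (W m^-2)

ep = F * (1.0 / T_atm - 1.0 / T_surf)
print(f"EP ~ {ep * 1e3:.0f} mW m^-2 K^-1")   # ~45, same order as quoted above
```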

 
15:00-15:30 Afternoon Tea
15:30-17:00 Discussion session - What are the key MEP questions to address in this programme?
17:00-18:00 Poster Session
18:15-19:00 Dinner at Churchill College
Friday 27 August
Session: Stochastic climate models
10:00-11:00 Kwasniok, F (University of Exeter)
  Empirical stochastic modelling in weather and climate science: applications from subgrid-scale parametrisation to analysis & modelling of palaeoclimatic records Sem 1
 

The dynamics of weather and climate encompass a wide range of spatial and temporal scales which are coupled through the nonlinear nature of the governing equations of motion. A stochastic climate model resolves only a limited number of large-scale, low-frequency modes; the effect of unresolved scales and processes onto the resolved modes is accounted for by stochastic terms. Here, such low-order stochastic models are derived empirically from time series of the system using statistical parameter estimation techniques.

The first part of the talk deals with subgrid-scale parametrisation in atmospheric models. By combining a clustering algorithm with local regression fitting, a stochastic closure model is obtained which is conditional on the state of the resolved variables. The method is illustrated on the Lorenz '96 system and then applied to a model of atmospheric low-frequency variability based on empirical orthogonal functions.

The second part of the talk is concerned with deriving simple dynamical models of glacial millennial-scale climate variability from ice-core records. Firstly, stochastically driven motion in a potential is adopted. The shape of the potential and the noise level are estimated from ice-core data using a nonlinear Kalman filter. Secondly, a mixture of linear stochastic processes conditional on the state of the system is used to model ice-core time series.
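
The first of these model classes can be sketched in a few lines; the potential shape and noise level below are illustrative, with no attempt at the parameter-estimation step, which the talk addresses with a nonlinear Kalman filter.

```python
# Stochastically driven motion in a double-well potential,
#   dx = -V'(x) dt + sigma dW,  V(x) = x^4/4 - x^2/2,
# produces spontaneous jumps between two states, qualitatively like
# millennial-scale transitions in ice-core records.
import numpy as np

rng = np.random.default_rng(6)

def V_prime(x):               # wells at x = -1 and x = +1
    return x ** 3 - x

dt, n, sigma = 0.01, 500_000, 0.45
x = np.empty(n)
x[0] = -1.0
for k in range(n - 1):
    x[k + 1] = (x[k] - V_prime(x[k]) * dt
                + sigma * np.sqrt(dt) * rng.standard_normal())

well = np.sign(x[np.abs(x) > 0.75])       # samples clearly inside a well
transitions = int(np.sum(np.diff(well) != 0))
print(f"well-to-well transitions: {transitions}")
```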

 
11:00-11:30 Morning Coffee
11:30-12:30 Steinheimer, M (ECMWF)
  Stochastic representation of model uncertainties in ECMWF's forecasting system Sem 1
 

The Integrated Forecasting System (IFS) is a sophisticated software system for weather forecasting, jointly developed by the European Centre for Medium-Range Weather Forecasts (ECMWF) and Météo-France. It includes all applications needed for generating operational weather forecasts, such as data assimilation, the atmospheric model and post-processing. The IFS is used for deterministic 10-day forecasts and for ensemble forecasts with ranges from 15 days for the medium-range EPS, through 32 days for the monthly forecast, up to 13 months for the seasonal forecasts. In addition to a good deterministic forecast model as the basis of the ensemble prediction system, the ingredients needed to produce good ensemble forecasts are realistic and appropriate representations of the initial and model uncertainties. The stochastic schemes used for the model error representation will be presented. These are the Spectral Stochastic Backscatter Scheme (SPBS) and the Stochastically Perturbed Parametrization Tendency Scheme (SPPT). The basis of both schemes is a random spectral pattern generator in which the spectral coefficients are evolved with a first-order auto-regressive process. The resulting pattern varies smoothly in space and time, with easily controlled spatial and temporal correlations. The two schemes address different aspects of model error. SPPT addresses uncertainty in existing parametrization schemes, for example in parameter settings, and therefore generalizes the output of existing parametrizations as probability distributions. SPBS, on the other hand, describes upscale energy transfer related to spurious numerical dissipation, as well as the upscale energy transfer from unbalanced motions associated with convection and gravity waves, processes missing in conventional parametrization schemes. Cellular Automata (CA) offer an alternative way of generating random patterns with temporal and spatial correlations. A pattern generator based on a probabilistic CA was implemented in the IFS. The implementation allows the interaction of model fields with the CA, i.e. the characteristics of the CA are influenced by the atmospheric state. The impact of the stochastic schemes on forecast skill will be presented for different forecast ranges.
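
The pattern-generator idea is simple to demonstrate in one dimension; the sketch below is a schematic of my own, far smaller than the operational spectral pattern generator.

```python
# Evolve a few low-wavenumber spectral coefficients with a first-order
# auto-regressive (AR(1)) process and transform to grid space: the
# result is a random pattern that is smooth in space and correlated in
# time, with both correlations easy to control (phi, amp).
import numpy as np

rng = np.random.default_rng(5)
nx, nmodes = 128, 8                  # grid points; retained wavenumbers
phi = 0.95                           # AR(1) coefficient -> time correlation
amp = np.exp(-np.arange(1, nmodes + 1) / 3.0)   # spectral amplitudes

coef = np.zeros(nmodes, dtype=complex)
fields = []
for _ in range(100):
    noise = rng.standard_normal(nmodes) + 1j * rng.standard_normal(nmodes)
    coef = phi * coef + np.sqrt(1.0 - phi ** 2) * amp * noise
    spec = np.zeros(nx // 2 + 1, dtype=complex)
    spec[1:nmodes + 1] = coef        # lowest wavenumbers only -> smooth field
    fields.append(np.fft.irfft(spec, n=nx) * nx)

r = np.corrcoef(fields[-2], fields[-1])[0, 1]
print(f"lag-1 pattern correlation ~{r:.2f} (cf. phi = {phi})")
```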

 
12:30-13:30 Lunch at Wolfson Court
14:00-15:00 Berner, J (National Center for Atmospheric Research)
  Model uncertainty in weather and climate models: Stochastic and multi-physics representations Sem 1
 

A multi-physics and a stochastic kinetic-energy backscatter scheme are employed to represent model uncertainty in a mesoscale ensemble prediction system using the Weather Research and Forecasting model. Both model-error schemes lead to significant improvements over the control ensemble system that is simply a downscaled global ensemble forecast with the same physics for each ensemble member. The improvements are evident in verification against both observations and analyses, but different in some details. Overall the stochastic kinetic-energy backscatter scheme outperforms the multi-physics scheme, except near the surface. Best results are obtained when both schemes are used simultaneously, indicating that the model error can best be captured by a combination of multiple schemes.

 
15:00-15:30 Afternoon Tea
15:30-16:30 Williams, P (University of Reading)
  The impacts of stochastic noise on climate models Sem 1
 

Our understanding of the climate system has been revolutionized by the development of sophisticated computer models. Yet, these models are not perfect representations of reality, because they remove from explicit consideration many physical processes which are known to be key aspects of the climate system, but which are too small or fast to be modelled. Examples of such processes include gravity waves. This talk will give several examples of implementations and impacts of stochastic sub-grid representations in atmosphere and ocean models.

 
16:45-17:45 Plant, R (University of Reading)
  Issues with convection. What is a useful framework beyond bulk models of large N, non-interacting, scale-separated, equilibrium systems? Sem 1
 

The representation of cumulus clouds presents some notoriously stubborn problems in climate modelling. The starting point for our representations is the well-known Arakawa and Schubert (1974) system which describes interactions of cloud types ("plumes") with their environment. In some ways, this system has become brutally simplified: in applications, generally only a single "bulk" cloud type is considered, there are assumed to be very many clouds present, and an equilibrium between convection and forcing is assumed to be rapidly reached. In other ways, the system has become greatly complicated: the description of a plume is much more "sophisticated". In this talk, I want to consider what might be learnt from almost the opposite perspective: i.e., keep the plume description brutally simple, but take seriously the implications of issues like finite cloud number (leading naturally to important stochastic effects), competitive communities of cloud types (leading to a proposed relation for the co-existence of shallow and deep convection) and prognostic effects (leading to questions about how far equilibrium thinking holds).
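
The finite-number point has a neat quantitative core, sketched below with illustrative numbers in the spirit of Craig-and-Cohen-style equilibrium statistics (not anything specific to this talk): with N clouds of exponentially distributed mass flux in a grid box, the total flux fluctuates with relative standard deviation sqrt(2/N), so stochastic effects grow as resolution shrinks the box.

```python
# Compound-Poisson picture of grid-box convection: cloud number is
# Poisson with mean N, each cloud's mass flux is exponential; the total
# flux then has relative standard deviation sqrt(2/N).
import numpy as np

rng = np.random.default_rng(8)
mean_flux_per_cloud = 1.0

for n_mean in (4, 40, 400):
    totals = []
    for _ in range(20_000):
        n = rng.poisson(n_mean)                                # cloud count
        totals.append(rng.exponential(mean_flux_per_cloud, n).sum())
    s = np.asarray(totals)
    print(f"<N>={n_mean:4d}: relative std = {s.std() / s.mean():.2f}"
          f"  (theory sqrt(2/N) = {np.sqrt(2.0 / n_mean):.2f})")
```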

 
18:15-19:00 Dinner at Churchill College
