Seminars (CLP)

Videos and presentation materials from other INI events are also available.


Event When Speaker Title Presentation Material
CLP 13th August 2010
14:00 to 15:00
Coherent structures in high-resolution ocean model simulations and implications for climate prediction
CLP 18th August 2010
14:00 to 15:00
Land-ocean surface temperature contrast and radiative forcing
CLP 19th August 2010
14:00 to 15:00
J Haslett Studying uncertainty in palaeo-climate reconstructions: What can we infer from pollen about climate dynamics in the Younger Dryas?
CLP 20th August 2010
10:00 to 11:00
J Hargreaves & J Annan Understanding and interpreting climate model ensembles
CLPW01 23rd August 2010
10:00 to 11:00
A very grand challenge for the science of climate prediction
A rather prevalent picture of the development of climate models throughout the 20th Century is for the idealised, simplified, and hence mathematically tractable models of climate to be the focus of mathematicians, leaving to engineers the "brute force" approach of developing ab initio Earth System Models. I think we should leave this paradigm in the 20th Century, where it belongs: for one thing, the threat of climate change is too important and the problems of predicting climate reliably too great. For the 21st Century, I propose that mathematicians need to engage in developing innovative methods to represent the unresolved and poorly resolved scales in ab initio models, based on nonlinear stochastic-dynamic methods. The reasons are (at least) threefold. Firstly, climate model biases are still substantial, and may well be systemically related to the use of deterministic bulk-formula closure - this is an area where a much better basic understanding is needed. Secondly, deterministically formulated climate models are incapable of predicting the uncertainty in their predictions; and yet this is a crucially important prognostic variable for societal applications. Stochastic-dynamic closures can in principle provide this. Finally, the need to maintain worldwide a pool of quasi-independent deterministic models purely in order to have an ad hoc multi-model estimate of uncertainty does not make efficient use of the limited human and computer resources available worldwide for climate model development. The development of skilful stochastic-dynamic closures will obviate the need for such inefficient use of human resources. As such, a very grand challenge for the science of climate prediction is presented in the form of a plea for the engagement of mathematicians in the development of a prototype Probabilistic Earth-System Model. It is hoped that this Newton Institute Programme will be seen as pivotal for such development.
CLPW01 23rd August 2010
11:30 to 12:30
Is data assimilation relevant to climate research?
Data assimilation (DA) has not been used in climate studies to anything like the extent it has in weather prediction. I will discuss whether this is likely to change and argue that DA has a lot to offer climate research, particularly when cast in a Bayesian framework. Motivating examples from paleoclimate and ocean studies will be given that will serve to outline the major challenges arising in DA when it is used to tackle climate problems.
CLPW01 23rd August 2010
14:00 to 15:00
Identification and early warning of climate tipping points
Striking developments in the climate system in recent years have reinforced the view that anthropogenic radiative forcing is unlikely to cause a smooth transition into the future. Drought in the Amazon in 2005, record Arctic sea-ice decline in 2007, accelerating loss of water from the Greenland and West Antarctic ice sheets, and an extraordinary Asian summer monsoon in 2010, have all made the headlines. These large-scale components of the Earth system are among those that we have identified as potential ‘tipping elements’ – climate sub-systems that could exhibit a ‘tipping point’ where a small change in forcing causes a qualitative change in their future state. The resulting transition may be either abrupt or irreversible, or in the worst cases, both. In IPCC terms such changes are referred to as “large-scale discontinuities”. Should they occur, they would surely qualify as dangerous climate changes. Recent assessments suggest that the traditional view of the likelihood of tipping points as very low probability events should be revised upwards - especially if we continue business-as-usual. Given this, is there any prospect for providing societies with a useful early warning signal of an approaching climate tipping point? The talk will have two main aims. Firstly, we want to review (and slightly revise) the list of potential tipping elements, providing some updates, especially where there is new insight into the mechanisms behind them, or new information about the proximity of tipping points. Secondly, we want to present our ongoing work to try to develop robust methods of identifying and anticipating tipping points (in particular, bifurcations) in the climate system. Our latest application of these methods to sea surface temperature data suggests that a new climate state may be in the process of appearing, particularly in the Arctic and northernmost Atlantic region.
CLPW01 23rd August 2010
15:30 to 16:30
Maximum entropy production and climate modelling: an overview of theory and applications
Since the work of Onsager in the 1930s, Maximum Entropy Production (MaxEP) has been proposed in various guises as a thermodynamic selection principle governing the macroscopic behaviour of non-equilibrium systems. While some encouragingly realistic predictions have been obtained from MaxEP in a diverse range of non-equilibrium systems across physics, chemistry and biology – including climate systems – two outstanding questions have hindered its wider adoption as a mainstream predictive tool: What is the theoretical basis for MaxEP? And what is the appropriate entropy production to be maximised in any given problem? In this introductory talk I will summarise recent progress towards answering these questions, and outline some implications for the practical role of MaxEP in climate modelling.
CLPW01 23rd August 2010
16:45 to 17:45
G Shutts Current use of stochastic methods in operational NWP/climate forecasting: are they physically justifiable?
The physical basis for current methods of stochastic parametrization in NWP/climate models is reviewed and their plausibility assessed with respect to unresolved or near-gridscale meteorological phenomena. This issue is closely related to that of the predictability of convective scale and mesoscale weather systems. The coarse-graining strategy is described and applied to high-resolution NWP model forecast output and cloud-resolving model simulations of deep, tropical convection. The results are used to provide some constraints on the stochastic backscatter and the perturbed physical tendency approaches.
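As a concrete illustration of the coarse-graining idea (not the specific procedure used in the talk), the sketch below averages a high-resolution tendency field onto a coarse grid and collects the statistics of the residual, which is the quantity a stochastic closure would have to represent; the array sizes and the coarse_grain helper are invented for the example.

# Illustrative sketch only: coarse-grain a high-resolution 2-D tendency field onto
# a coarse grid and collect statistics of the residual ("subgrid") tendencies.
import numpy as np

def coarse_grain(field, block):
    """Average a 2-D field over block x block boxes."""
    ny, nx = field.shape
    return field[:ny - ny % block, :nx - nx % block] \
        .reshape(ny // block, block, nx // block, block).mean(axis=(1, 3))

rng = np.random.default_rng(0)
hires_tendency = rng.normal(size=(512, 512))            # stand-in for CRM output
coarse_mean = coarse_grain(hires_tendency, block=32)     # resolved-scale tendency
residual = hires_tendency - np.kron(coarse_mean, np.ones((32, 32)))
print("subgrid tendency std per coarse box:", residual.std())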
CLPW01 24th August 2010
10:00 to 11:00
Stochastic methods for understanding palaeoclimates
We review the fundamental basis of palaeoclimate theory: astronomical control on insolation, climate models as (stochastic) dynamical systems, and statistical frameworks for model selection and model calibration, accounting for the specificities of the palaeoclimate problem: sparse data, dating uncertainties and phenomenological character. In the spirit of the workshop, we emphasise the stochastic aspects of the theory. Stochastic methods intervene in model design, in order to parameterise climatic events at shorter time scales than the dynamics deterministically represented in the model. As stochastic parameterisations are introduced, the notions of synchronisation and climatic attractor have to be revisited, but modern mathematics provides the tools to this end (pullback and random attractors). In a specific example, we show how the synchronisation patterns on astronomical forcing evolve as the complexity of the astronomical forcing is gradually taken into account, and then when stochastic parameterisations are introduced. Stochastic methods naturally occur in statistical problems of model calibration and selection, via Monte-Carlo sampling methods. We give an overview of what has been attempted so far, including particle filters for state and parameter estimation, although we still are in uncharted territory. Finally, we conclude on more philosophical attempts at understanding the meaning of stochastic parameterisations ('sub-grid parameterisations' or 'model error').
CLPW01 24th August 2010
11:30 to 12:30
C Franzke Systematic Strategies for Stochastic Climate Modeling
The climate system has a wide range of temporal and spatial scales for important physical processes. Examples include convective activity with an hourly time scale, organized synoptic scale weather systems on a daily time scale, extra-tropical low-frequency variability on a time scale of 10 days to months, and decadal time scales of the coupled atmosphere-ocean system. An understanding of the processes acting on different spatial and temporal scales is important since all these processes interact with each other due to the nonlinearities in the governing equations. Most of the current problems in understanding and predicting the climate system stem from the multi-scale nature of the climate system, in that all of the above processes interact with each other and the neglect and/or misrepresentation of some of the processes leads to systematic biases of the resolved processes and uncertainties in the climate response. A better understanding of the multi-scale nature of the climate system will be crucial in making more accurate and reliable weather and climate predictions. In my presentation I will discuss systematic strategies to derive stochastic models for climate prediction. The stochastic mode reduction strategy accounts systematically for the effect of the unresolved degrees of freedom and predicts the functional form of the effective reduced equations. These procedures extend beyond simple Langevin equations with additive noise by predicting nonlinear effective equations with both additive and multiplicative (state-dependent) noises. The stochastic mode reduction strategy predicts rigorously closed form stochastic models for the slow variables in the limit of infinite separation of time-scales.
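The functional form referred to above (nonlinear drift with both additive and multiplicative noise) can be made concrete with a minimal scalar example; the sketch below integrates such a reduced equation with Euler-Maruyama, with coefficients chosen purely for illustration rather than derived from any climate model.

# Minimal sketch of the functional form produced by stochastic mode reduction:
# a scalar slow variable with cubic drift and both additive and multiplicative
# (state-dependent) noise, integrated with Euler-Maruyama. Coefficients are
# purely illustrative.
import numpy as np

a, b, c = 0.2, -0.1, 0.5          # drift coefficients (illustrative)
sigma_add, sigma_mult = 0.3, 0.2  # additive and multiplicative noise amplitudes
dt, nsteps = 1e-3, 200_000

rng = np.random.default_rng(1)
x = np.empty(nsteps)
x[0] = 0.0
for n in range(nsteps - 1):
    drift = a * x[n] + b * x[n]**2 - c * x[n]**3
    noise = sigma_add + sigma_mult * x[n]
    x[n + 1] = x[n] + drift * dt + noise * np.sqrt(dt) * rng.normal()

print("mean and variance of the reduced slow variable:", x.mean(), x.var())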
CLPW01 24th August 2010
14:00 to 15:00
Properties of the atmospheric response to tropical heating estimated from the fluctuation dissipation theorem
Recent studies have demonstrated the applicability of the Fluctuation Dissipation Theorem to atmospheric response problems in which the external stimulus is a function of space but is constant in time. These investigations have made clear the utility of the resulting response operators for addressing questions concerning optimal response, climate control, attribution and physical mechanisms. In this presentation we explore the usefulness of the FDT methodology for response problems in which the imposed forcing is a function of time. In our study we concentrate on the effects of time varying tropical heating. First we validate operators designed to match the solutions of AGCMs. Next we use the operators to systematically explore how the tropical and midlatitude response depends on attributes of the tropical heating including its position, structure and movement. Not only are operators for the response of mean state variables considered but also operators that give the response of functionals of the state, including eddy variance and fluxes associated with the storm tracks.
CLPW01 24th August 2010
15:30 to 16:30
The spectra of a general class of stochastic climate models
The simplest class of stochastic models relevant to geophysical applications consists of a linearization of the dynamical system and the addition of constant multivariate stochastic forcing. Such stochastic systems are known as finite dimensional Ornstein Uhlenbeck systems and have wide application. In this talk we describe a general decomposition of the equilibrium spectrum of such processes. This is of interest in applications since spectra of long time series are commonly robustly defined from observations. We apply this formalism to the case of ENSO where it is often argued that there is a dominant normal mode. Here we argue that the decadal part of the ENSO spectrum can be simply explained by the stimulation of the cross spectrum of the dominant normal mode. The cross spectrum is dependent on the ENSO cycle phase, meaning that this mechanism implies that the different ENSO phases have different spectral strengths at decadal frequencies.
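For a finite dimensional Ornstein-Uhlenbeck system dx = A x dt + B dW, the equilibrium spectral matrix has the closed form S(w) = (iwI - A)^-1 Q (-iwI - A^T)^-1 with Q = B B^T, up to the Fourier-convention normalisation. The short sketch below evaluates this formula for an illustrative two-dimensional system; the matrices are invented, not an ENSO fit.

# Sketch: closed-form spectral matrix of a multivariate Ornstein-Uhlenbeck process
#   dx = A x dt + B dW,   S(w) = (iwI - A)^-1 Q (-iwI - A^T)^-1,   Q = B B^T
# (normalisation depends on the Fourier convention). A and B are illustrative.
import numpy as np

A = np.array([[-0.5, 1.0],
              [-1.0, -0.5]])      # stable drift matrix (damped oscillation)
B = np.diag([0.3, 0.1])
Q = B @ B.T
I = np.eye(2)

def spectrum(w):
    G = np.linalg.inv(1j * w * I - A)
    return G @ Q @ G.conj().T

for w in np.linspace(0.0, 3.0, 7):
    S = spectrum(w)
    print(f"w = {w:4.2f}   S_11 = {S[0, 0].real:.4f}   |S_12| = {abs(S[0, 1]):.4f}")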
CLPW01 25th August 2010
10:00 to 11:00
M Thompson & J Sieber Climate tipping as a noisy bifurcation: a predictive technique
In the first half of this contribution (speaker JMTT) we review the bifurcations of dissipative dynamical systems. The co-dimension-one bifurcations, namely those which can be typically encountered under slowly evolving controls, can be classified as safe, explosive or dangerous. Focusing on the dangerous events, which could underlie climate tippings, we examine the precursors (in particular the slowing of transients) and the outcomes which can be indeterminate due to fractal basin boundaries. It is often known, from modelling studies, that a certain mode of climate tipping is governed by an underlying bifurcation. For the case of a so-called fold, a commonly encountered bifurcation (of the oceanic thermohaline circulation, for example), we estimate (speaker JS) how likely it is that the system escapes from its currently stable state due to noise before the tipping point is reached. Our analysis is based on simple normal forms, which makes it potentially useful whenever this type of tipping is identified (or suspected) in either climate models or measurements. Drawing on this, we suggest a scheme of analysis that determines the best stochastic fit to the existing data. This provides the evolution rate of the effective control parameter, the (parabolic) variation of the stability coefficient, the path itself and its tipping point. By assessing the actual effective level of noise in the available time series, we are then able to make probability estimates of the time of tipping. In this vein, we examine, first, the output of a computer simulation for the end of greenhouse Earth about 34 million years ago when the climate tipped from a tropical state into an icehouse state with ice caps. Second, we use the algorithms to give probabilistic tipping estimates for the end of the most recent glaciation of the Earth using actual archaeological ice-core data.
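A stripped-down version of the ingredients described above (not the authors' fitting scheme): simulate the fold normal form with a slowly ramped control parameter and additive noise, and watch the variance and lag-1 autocorrelation of fluctuations about the moving equilibrium grow as the fold is approached, the "slowing of transients" precursor. All parameter values are illustrative.

# Toy fold normal form  dx = (mu(t) - x^2) dt + sigma dW,  with mu ramped slowly
# towards the fold at mu = 0. Early-warning indicators rise as the fold nears.
import numpy as np

dt, nsteps, sigma = 0.01, 60_000, 0.02
mu = np.linspace(4.0, 0.05, nsteps)        # slow ramp of the control parameter
rng = np.random.default_rng(2)

x = np.empty(nsteps)
x[0] = np.sqrt(mu[0])                      # start on the stable branch x = +sqrt(mu)
for n in range(nsteps - 1):
    x[n + 1] = x[n] + (mu[n] - x[n]**2) * dt + sigma * np.sqrt(dt) * rng.normal()

resid = x - np.sqrt(mu)                    # fluctuations about the moving equilibrium

def lag1_autocorr(seg):
    s = seg - seg.mean()
    return np.corrcoef(s[:-1], s[1:])[0, 1]

for name, seg in (("early", resid[:10_000]), ("late ", resid[-10_000:])):
    print(name, "variance:", round(seg.var(), 6),
          " lag-1 autocorrelation:", round(lag1_autocorr(seg), 3))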
CLPW01 25th August 2010
11:30 to 12:30
Rate-dependent tipping points: the example of the compost-bomb instability
This paper discusses rate-dependent tipping points related to a novel excitability type where a (globally) stable equilibrium exists for all different fixed settings of a system's parameter but catastrophic excitable bursts appear when the parameter is increased slowly, or ramped, from one setting to another. Such excitable systems form a singularly perturbed problem with at least two slow variables, and we focus on the case with locally folded critical manifold. Our analysis based on desingularisation relates the rate-dependent tipping point to a canard trajectory through a folded saddle and gives the general equation for the critical rate of ramping. The general analysis is motivated by a need to understand the response of peatlands to global warming. It is estimated that peatland soils contain 400 to 1000 billion tonnes of carbon, which is of the same order of magnitude of the carbon content of the atmosphere. Recent work suggests that biochemical heat release could destabilize peatland above some critical rate of global warming, leading to a catastrophic release of soil carbon into the atmosphere termed the ``compost bomb instability''. This instability is identified as a rate-dependent tipping point in the response of the climate system to anthropogenic forcing (atmospheric temperature ramping).
CLPW01 25th August 2010
14:00 to 15:00
Delivering local-scale climate scenarios for impact assessments
Process-based models, used in assessment of the impact of climate change, require daily weather as one of their main inputs. The direct use of climate predictions from global or regional climate models can be problematic because of the coarse spatial resolution and large uncertainty in their output at a daily scale, particularly for precipitation. Output from a climate model requires application of various downscaling techniques, such as a weather generator (WG). A WG is a model which, after calibration of site parameters with observed weather, is capable of simulating synthetic daily weather that is statistically similar to the observations. By altering the site parameters using changes in climate predicted from climate models, it is possible to generate daily weather for the future. A dataset, ELPIS, of local-scale daily climate scenarios for Europe has been developed. This dataset is based on 25 km grids of interpolated daily precipitation, minimum and maximum temperatures and radiation from the European Crop Growth Monitoring System (CGMS) meteorological dataset and climate predictions from the multi-model ensemble of 15 global climate models that were used in the IPCC 4th Assessment Report. The site parameters for the distributions of climatic variables have been estimated by the LARS-WG weather generator for nearly 12 000 grids in Europe for the period 1982–2008. The ability of LARS-WG to reproduce observed weather was assessed using statistical tests. This dataset was designed for use in conjunction with process-based impact models (e.g. crop simulation models) for the assessment of climate change impacts in Europe. A climate scenario generated by LARS-WG for a grid represents daily weather at a typical site from this grid that is used for agricultural production. This makes it different from the recently developed 25 km gridded dataset for Europe (E-OBS), which gives the best estimate of grid box averages to enable direct comparison with regional climate models.
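The change-factor logic of a weather generator can be shown in miniature: calibrate a simple distribution from observed daily data, perturb its parameters by a change predicted by a climate model, and simulate synthetic future values. The sketch below does this for wet-day precipitation amounts with a method-of-moments gamma fit; LARS-WG itself uses semi-empirical distributions and wet/dry spell series, so this shows the idea only, with invented data and change factors.

# Miniature of the weather-generator idea: calibrate, perturb, resimulate.
import numpy as np

rng = np.random.default_rng(3)
obs_precip = rng.gamma(shape=0.8, scale=6.0, size=3000)   # stand-in for observed wet-day amounts (mm)

# "Calibrate" site parameters from observations (method-of-moments gamma fit).
mean, var = obs_precip.mean(), obs_precip.var()
shape, scale = mean**2 / var, var / mean

# Perturb the parameters with a change factor taken from a climate model
# (e.g. +10% mean wet-day precipitation in the scenario period; invented here).
scale_future = scale * 1.10

synthetic_future = rng.gamma(shape, scale_future, size=3000)
print("observed mean (mm/day):", round(obs_precip.mean(), 2))
print("synthetic future mean (mm/day):", round(synthetic_future.mean(), 2))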
CLPW01 25th August 2010
15:30 to 16:30
Biases and uncertainty in multi-model climate projections
The ensemble approach was originally developed for probabilistic medium-range weather forecasting, and is now broadly used in numerical weather prediction, seasonal forecasting and climate research on a wide range of time scales. Applications geared towards climate projections are usually based on a heterogeneous ensemble with typically a mere handful of ensemble members, stemming from different models in an only partly coordinated framework. An important limitation of ensemble approaches in climate research is the inability to rigorously quantify climate model biases. While biases of climate models are monitored for the control period, the lack of long-term comprehensive observations (on the centennial time-scales considered) implies that it is difficult to decide how the model biases will change with the climate state. In contrast to other studies, we look not only at 20 or 30 year averages, but also at the interannual variability. This allows us to consider additive and multiplicative biases. In the talk, I will discuss two plausible assumptions about the extrapolation of additive biases, referred to as the ``constant bias'' and ``constant relation'' assumptions. The former is used implicitly in most studies of climate change. The latter asserts that over-/underestimation of the interannual variability in the control period leads also to over-/underestimation of climate change, and this assumption is closely related to the statistical post-processing of seasonal climate predictions. In addition we explicitly allow the additive and multiplicative model biases to change between control and scenario periods, resolving the resulting lack of identifiability by the use of informative priors. An analysis of GCM/RCM simulations from the ENSEMBLES project shows that bias assumptions critically affect the results for several regions and seasons.
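A schematic, non-Bayesian reading of the two assumptions may help fix ideas: under ``constant bias'' the additive bias cancels and the raw modelled change is used directly, while under ``constant relation'' an over- or underestimate of interannual variability in the control period is assumed to carry over to the change, so the change is rescaled by the ratio of observed to modelled variability. The numbers in the sketch below are invented, and the talk's full treatment is Bayesian with the biases treated as uncertain.

# Schematic reading of the two bias assumptions; all numbers invented.
import numpy as np

obs_ctl  = np.array([14.2, 15.1, 13.8, 14.9, 14.4])   # observed control-period values
mod_ctl  = np.array([13.0, 14.5, 12.1, 14.1, 12.9])   # model, control period
mod_scen = np.array([15.6, 17.2, 14.8, 16.9, 15.4])   # model, scenario period

raw_change = mod_scen.mean() - mod_ctl.mean()

# "Constant bias": the additive bias cancels, so the raw change is used directly.
change_constant_bias = raw_change

# "Constant relation": over/underestimated interannual variability in the control
# period is assumed to carry over to the change, so the change is rescaled.
ratio = obs_ctl.std(ddof=1) / mod_ctl.std(ddof=1)
change_constant_relation = raw_change * ratio

print("raw change:", round(raw_change, 2))
print("constant-bias change:", round(change_constant_bias, 2))
print("constant-relation change:", round(change_constant_relation, 2))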
CLPW01 25th August 2010
16:45 to 17:45
P Cox Model resolution versus ensemble size: optimizing the trade-off for finite computing resources
CLPW01 26th August 2010
10:00 to 11:00
Life, hierarchy, and the thermodynamic machinery of planet Earth
Throughout Earth’s history, life has increased greatly in abundance, complexity, and diversity. At the same time, it has substantially altered the Earth’s environment, evolving some of its variables to states further and further away from thermodynamic equilibrium. For instance, concentrations of atmospheric oxygen have increased throughout Earth's history, resulting in an increased chemical disequilibrium in the atmosphere as well as an increased redox gradient between the atmosphere and the Earth's reducing crust. These trends seem to contradict the second law of thermodynamics, which states for isolated systems that gradients and free energy are dissipated over time, resulting in a state of thermodynamic equilibrium. This seeming contradiction is resolved by considering planet Earth as a coupled, hierarchical and evolving non-equilibrium thermodynamic system that has been substantially altered by the input of free energy generated by photosynthetic life. Here, I present this hierarchical thermodynamic theory of the Earth system. I first present simple considerations to show that thermodynamic variables are driven away from a state of thermodynamic equilibrium by the transfer of power from some other process and that the resulting state of disequilibrium reflects the past net work done on the variable. This is applied to the processes of planet Earth to characterize the generation and transfer of free energy and its dissipation, from radiative gradients to temperature and chemical potential gradients that result in chemical, kinetic, and potential free energy and associated dynamics of the climate system and geochemical cycles. The maximization of power transfer among the processes within this hierarchy is closely related to the proposed principle of Maximum Entropy Production (MEP). The role of life is then discussed as a photochemical process that generates substantial amounts of additional free energy which essentially skips the limitations and inefficiencies associated with the transfer of power within the thermodynamic hierarchy of the planet. In summary, this perspective allows us to view life as being the means to transform many aspects of planet Earth to states even further away from thermodynamic equilibrium than is possible by purely abiotic means. In this perspective pockets of low-entropy life emerge from the overall trend of the Earth system to increase the entropy of the universe at the fastest possible rate. The implications of the theory presented here are discussed with regard to fundamental deficiencies in Earth system modeling, applications of the theory to reconstructions of Earth system history, the evaluation of human impacts, and the limits of renewable sources of free energy for future human energy demands.
CLPW01 26th August 2010
11:30 to 12:30
MEP and planetary climates: insights from a two-box climate model containing atmospheric dynamics
CLPW01 26th August 2010
14:00 to 15:00
Climate entropy production based on AOGCM diagnostics
Most investigations of the MEP hypothesis have used climate models which do not explicitly simulate the physics and dynamics of the climate system (such as Paltridge's model), or in which they are radically simplified (such as a dry GCM). We have instead concentrated on entropy analysis of the HadCM3 atmosphere-ocean general circulation model (AOGCM), the kind of model used for prediction of 21st-century global climate change. In the AOGCM, we diagnose the entropy sources and sinks directly from the diabatic heating terms. The rate of material entropy production of the climate system (i.e. not including thermal equilibration of radiation) is about 50 mW m-2 K-1. The largest part of the material EP (about 38 mW m-2 K-1) is due to sensible and latent heat transport. When we vary parameters in the physical formulation of the AOGCM, MEP might suggest that the most realistic version is the one with the largest EP. However, in the AOGCM there is no maximum in EP, for two reasons. First, the strongest influence on EP is the throughput of energy from the net shortwave absorption, which is very sensitive to model parametrisation, rather than the anticorrelation of heat flux and temperature gradient seen in simple models when net shortwave absorption is fixed. This dependence comes particularly from the dominance of EP by the hydrological cycle, which intensifies monotonically with the global average temperature. Second, the EP predominantly comes from vertical heat transport, and to achieve a maximum with fixed shortwave heating implies an unrealistic vertical temperature gradient and/or unphysical longwave emissivity. There is, however, a maximum in KE dissipation in the atmosphere, similar to Lorenz's (1960) conjecture, associated with a smaller part of the material EP (about 13 mW m-2 K-1).
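The core of the diagnostic is simply to divide each diabatic heating term by the temperature at which it acts and form a global, area-weighted integral. The sketch below shows that operation for a single heating field on a latitude-longitude grid; the arrays are random stand-ins rather than HadCM3 diagnostics.

# Minimal form of the entropy-production diagnostic: heating divided by local
# temperature, then a global (area-weighted) mean. Arrays are random stand-ins.
import numpy as np

rng = np.random.default_rng(4)
nlat, nlon = 73, 96
lat = np.linspace(-90, 90, nlat)
weights = np.cos(np.deg2rad(lat))[:, None] * np.ones((nlat, nlon))

heating = rng.normal(0.0, 20.0, size=(nlat, nlon))               # diabatic heating, W m-2
temperature = 288.0 + rng.normal(0.0, 15.0, size=(nlat, nlon))   # K

entropy_source = heating / temperature                            # W m-2 K-1
global_mean = np.average(entropy_source, weights=weights)
print("global-mean entropy source (mW m-2 K-1):", round(1e3 * global_mean, 2))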
CLPW01 27th August 2010
10:00 to 11:00
Empirical stochastic modelling in weather and climate science: applications from subgrid-scale parametrisation to analysis & modelling of palaeoclimatic records
The dynamics of weather and climate encompass a wide range of spatial and temporal scales which are coupled through the nonlinear nature of the governing equations of motion. A stochastic climate model resolves only a limited number of large-scale, low-frequency modes; the effect of unresolved scales and processes onto the resolved modes is accounted for by stochastic terms. Here, such low-order stochastic models are derived empirically from time series of the system using statistical parameter estimation techniques.

The first part of the talk deals with subgrid-scale parametrisation in atmospheric models. By combining a clustering algorithm with local regression fitting, a stochastic closure model is obtained which is conditional on the state of the resolved variables. The method is illustrated on the Lorenz '96 system and then applied to a model of atmospheric low-frequency variability based on empirical orthogonal functions.

The second part of the talk is concerned with deriving simple dynamical models of glacial millennial-scale climate variability from ice-core records. Firstly, stochastically driven motion in a potential is adopted. The shape of the potential and the noise level are estimated from ice-core data using a nonlinear Kalman filter. Secondly, a mixture of linear stochastic processes conditional on the state of the system is used to model ice-core time series.
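A simpler alternative to the Kalman-filter estimation mentioned above, shown here only to make the "motion in a potential" idea concrete, is a binned (Kramers-Moyal type) estimate of the drift from the time series, from which a potential follows by integration; the record below is synthetic double-well data, not an ice core.

# Binned drift estimate from a scalar time series, and the implied potential.
import numpy as np

dt, nsteps, sigma = 0.05, 200_000, 0.5
rng = np.random.default_rng(5)
x = np.empty(nsteps)
x[0] = 1.0
for n in range(nsteps - 1):                      # synthetic double-well record
    x[n + 1] = x[n] + (x[n] - x[n]**3) * dt + sigma * np.sqrt(dt) * rng.normal()

bins = np.linspace(-2.0, 2.0, 41)
centres = 0.5 * (bins[:-1] + bins[1:])
increments = np.diff(x) / dt
which = np.digitize(x[:-1], bins) - 1
drift = np.array([increments[which == i].mean() if np.any(which == i) else np.nan
                  for i in range(len(centres))])

potential = -np.nancumsum(drift) * (centres[1] - centres[0])   # V(x) = -integral of drift
print("estimated drift near x = 1 (true value ~ 0):",
      round(drift[np.argmin(abs(centres - 1.0))], 3))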
CLPW01 27th August 2010
11:30 to 12:30
Stochastic representation of model uncertainties in ECMWF's forecasting system
The Integrated Forecasting System (IFS) is a sophisticated software system for weather forecasting, which was jointly developed by the European Centre for Medium-Range Weather Forecasts (ECMWF) and Meteo France. All applications needed for generating operational weather forecasts are included, such as data assimilation, the atmospheric model and post-processing. The IFS is used for deterministic 10 day forecasts and ensemble forecasts with forecast ranges from 15 days for the medium range EPS, 32 days for the monthly forecast, up to 13 months for the seasonal forecasts. In addition to a good deterministic forecast model as the basis of the ensemble prediction system, the ingredients needed to produce good ensemble forecasts are realistic and appropriate representations of the initial and model uncertainties. The stochastic schemes used for the model error representation will be presented. These are the Spectral Stochastic Backscatter Scheme (SPBS) and the Stochastically Perturbed Parametrization Tendency Scheme (SPPT). The basis of both schemes is a random spectral pattern generator, in which the spectral coefficients are evolved with a first order auto-regressive process. The resulting pattern varies smoothly in space and time with easy to control spatial and temporal correlation. The two schemes address different aspects of model error. SPPT addresses uncertainty in existing parametrization schemes, such as parameter settings, and therefore generalizes the output of existing parametrizations as probability distributions. SPBS on the other hand describes upscale energy transfer related to spurious numerical dissipation as well as the upscale energy transfer from unbalanced motions associated with convection and gravity waves, processes missing in conventional parametrization schemes. Cellular Automata (CA) are an alternative way of generating random patterns with temporal and spatial correlations. A pattern generator based on a probabilistic CA was implemented in the IFS. The implementation allows the interaction of model fields with the CA, i.e. the characteristics of the CA are influenced by the atmospheric state. The impact of the stochastic schemes on the forecast skill will be presented for different forecast ranges.
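The pattern generator at the heart of both schemes can be illustrated in one dimension: complex spectral coefficients are evolved with a first-order auto-regressive process and transformed to grid space, giving a field that is smooth in space and correlated in time. The wavenumber weighting, AR(1) coefficient and grid in the sketch below are illustrative choices, not the operational IFS settings.

# 1-D miniature of a random spectral pattern generator: AR(1)-evolved complex
# spectral coefficients, transformed to grid space at every time step.
import numpy as np

nwave, ngrid, nsteps = 21, 128, 200
phi = 0.95                                     # AR(1) coefficient (sets decorrelation time)
amp = np.exp(-np.arange(nwave) / 8.0)          # spectral weighting (sets spatial correlation)
rng = np.random.default_rng(6)

coeffs = np.zeros(nwave, dtype=complex)
x = np.linspace(0, 2 * np.pi, ngrid, endpoint=False)
for step in range(nsteps):
    noise = rng.normal(size=nwave) + 1j * rng.normal(size=nwave)
    coeffs = phi * coeffs + np.sqrt(1 - phi**2) * amp * noise
    pattern = sum(np.real(c * np.exp(1j * k * x)) for k, c in enumerate(coeffs))

print("pattern std at final step:", round(pattern.std(), 3))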
CLPW01 27th August 2010
14:00 to 15:00
Model uncertainty in weather and climate models: Stochastic and multi-physics representations
A multi-physics and a stochastic kinetic-energy backscatter scheme are employed to represent model uncertainty in a mesoscale ensemble prediction system using the Weather Research and Forecasting model. Both model-error schemes lead to significant improvements over the control ensemble system that is simply a downscaled global ensemble forecast with the same physics for each ensemble member. The improvements are evident in verification against both observations and analyses, but different in some details. Overall the stochastic kinetic-energy backscatter scheme outperforms the multi-physics scheme, except near the surface. Best results are obtained when both schemes are used simultaneously, indicating that the model error can best be captured by a combination of multiple schemes.
CLPW01 27th August 2010
15:30 to 16:30
The impacts of stochastic noise on climate models
Our understanding of the climate system has been revolutionized by the development of sophisticated computer models. Yet, these models are not perfect representations of reality, because they remove from explicit consideration many physical processes which are known to be key aspects of the climate system, but which are too small or fast to be modelled. Examples of such processes include gravity waves. This talk will give several examples of implementations and impacts of stochastic sub-grid representations in atmosphere and ocean models.
CLPW01 27th August 2010
16:45 to 17:45
R Plant Issues with convection. What is a useful framework beyond bulk models of large N, non-interacting, scale-separated, equilibrium systems?
The representation of cumulus clouds presents some notoriously stubborn problems in climate modelling. The starting point for our representations is the well-known Arakawa and Schubert (1974) system which describes interactions of cloud types ("plumes") with their environment. In some ways, this system has become brutally simplified: in applications, generally only a single "bulk" cloud type is considered, there are assumed to be very many clouds present, and an equilibrium between convection and forcing is assumed to be rapidly reached. In other ways, the system has become greatly complicated: the description of a plume is much more "sophisticated". In this talk, I want to consider what might be learnt from almost the opposite perspective: i.e., keep the plume description brutally simple, but take seriously the implications of issues like finite cloud number (leading naturally to important stochastic effects), competitive communities of cloud types (leading to a proposed relation for the co-existence of shallow and deep convection) and prognostic effects (leading to questions about how far equilibrium thinking holds).
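The simplest finite-cloud-number effect can be illustrated with a toy calculation: if the number of plumes in a grid box is Poisson distributed and each plume carries an independently distributed mass flux, the relative fluctuation of the grid-box total decays like 1/sqrt(N), negligible for large boxes but not for small ones. The sketch below (exponentially distributed plume fluxes, invented numbers) is in this spirit and is not the speaker's formulation.

# Toy finite cloud-number effect: Poisson number of plumes, exponential mass fluxes.
import numpy as np

rng = np.random.default_rng(7)
mean_flux_per_plume = 1.0

for mean_n in (5, 50, 500):                    # small grid box -> few clouds -> noisy
    totals = []
    for _ in range(5000):
        n = rng.poisson(mean_n)
        totals.append(rng.exponential(mean_flux_per_plume, size=n).sum())
    totals = np.array(totals)
    print(f"<N> = {mean_n:4d}   relative std of total mass flux = "
          f"{totals.std() / totals.mean():.3f}")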
CLP 2nd September 2010
10:00 to 11:00
Towards a theory of the North Atlantic oscillation
CLP 2nd September 2010
14:00 to 15:00
Recovering past dynamics from a computer model: inversion of a vegetation model for paleoclimate reconstruction
CLP 3rd September 2010
10:00 to 11:00
Inferring past ice: Bayesian calibration of a 3D glacial system model for the last deglaciation: methodology, challenges, and lessons
CLP 3rd September 2010
14:00 to 15:00
Not-implausible statistical climate modelling
CLP 6th September 2010
14:00 to 15:00
Towards a mathematical theory of climate sensitivity
CLP 7th September 2010
10:00 to 11:00
A helpful way to think about multi-model ensembles and the actual climate
CLP 8th September 2010
11:00 to 12:00
V Lucarini A statistical mechanical approach for the computation of the climatic response to general forcing
The climate belongs to the class of non-equilibrium forced and dissipative systems, for which most results of quasi-equilibrium statistical mechanics, including the fluctuation-dissipation theorem, do not apply. We show for the first time how the Ruelle linear response theory, developed for studying rigorously the impact of perturbations on general observables of non-equilibrium statistical mechanical systems, can be applied to analyze the climatic response. We choose as test bed the Lorenz 96 model, which has a well-recognized prototypical value. We recapitulate the main aspects of the response theory and propose some new results. We then analyze the frequency dependence of the response of both local and global observables to perturbations with localized as well as global spatial patterns. We derive analytically the asymptotic behaviour, validity of Kramers-Kronig relations, and sum rules for the susceptibilities, and relate them to parameters describing the unperturbed properties of the system. We verify the theoretical predictions from the outputs of the simulations with great precision. The theory is used to explain differences in the response of local and global observables, in defining the intensive properties of the system and in generalizing the concept of climate sensitivity to all time scales. We also show how to reconstruct the linear Green function, which maps perturbations of general time patterns into changes in the expectation value of the considered observable. Finally, we propose a general methodology to study Climate Change problems by resorting to a few, well selected simulations, and discuss the specific case of surface temperature response to changes of the $CO_2$ concentration. This approach may provide a radically new perspective to study rigorously the problem of climate sensitivity and climate change.
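Once the Green function is available, the response to an arbitrary forcing time pattern is a convolution; the sketch below evaluates the discrete form of that relation with an assumed exponentially decaying Green function and a step forcing, rather than a Green function reconstructed from the Lorenz 96 experiments.

# Discrete form of the linear response relation  <dA>(t) = integral G(s) f(t - s) ds.
# The Green function here is an assumed decaying exponential, purely illustrative.
import numpy as np

dt = 0.1
t = np.arange(0, 50, dt)
green = np.exp(-t / 5.0) * dt            # causal Green function G(s) ds  (assumed)
forcing = np.where(t > 10.0, 1.0, 0.0)   # forcing switched on at t = 10

response = np.convolve(forcing, green)[:len(t)]
print("asymptotic response (should approach the integral of G):", round(response[-1], 3))
print("integral of G:", round(green.sum(), 3))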
CLP 9th September 2010
14:00 to 15:00
ABC and other challenges in computational statistics
CLP 15th September 2010
10:00 to 11:00
Emulation and calibration of a flood inundation model for climate risk analysis
CLP 15th September 2010
14:00 to 15:00
Physics of the Atlantic multidecadal oscillation
CLP 16th September 2010
11:00 to 12:00
J Tribbia How to break quasi-geostrophic turbulence
CLP 16th September 2010
14:00 to 15:00
How we build a parameterisation scheme: illustrated by Langmuir turbulence in the ocean mixed layer
CLP 17th September 2010
11:00 to 12:00
A Fournier Development of wavelet methodology for weather Data Assimilation
This work aims at improved computation of covariances and of multiscale structures such as clouds, in the Weather Research and Forecasting (WRF) data-assimilation (WRFDA) system, in particular the horizontal factor of the control-variable transform used to optimize the forecast initialization. Better representation can be achieved in the horizontal transform by wavelet-compression techniques that have been proven in many other applications.

In this work, two past obstacles to effective incorporation of wavelets in limited-area models such as WRF are resolved: isometric-injective (i.e., energy preserving, left-invertible) wavelets avoid boundary-condition assumptions at any scale; and these wavelets can be applied to non-dyadic data lengths. A summary technical description of these improved wavelets and their implementation into WRFDA is presented. By retaining only a diagonal background-covariance matrix in wavelet space, appropriate heterogeneity is obtained for the model-space covariances.

A second wavelet application is to partition observation error into a part due to poor representation (e.g., too-coarse resolution), and a residual, using a novel criterion in wavelet space. Other methods to construct inhomogeneous anisotropic covariance models are cited, and other potential technical improvements are discussed.

CLPW02 21st September 2010
09:30 to 10:15
Outstanding problems in probabilistic prediction of climate
CLPW02 21st September 2010
10:15 to 11:00
Model inadequacies and physical processes
CLPW02 21st September 2010
14:00 to 14:45
R Chandler Methodologies for probabilistic uncertainty assessment
CLPW02 21st September 2010
14:45 to 15:30
Probabilistic methodology used for UKCIP
CLPW02 21st September 2010
16:00 to 16:15
Probabilistic use of climate catastrophe multi-models
CLPW02 22nd September 2010
09:30 to 10:15
A Dempster Non-probabilistic frameworks
CLPW02 22nd September 2010
10:15 to 11:00
Probabilistic frameworks
CLP 30th September 2010
14:00 to 15:00
Sir Gilbert Walker and a connection between El Nino and statistics
CLP 6th October 2010
10:00 to 11:00
Understanding uncertainty in climate model components
CLP 7th October 2010
14:00 to 15:00
Bayesian calibration of the Thermosphere-Ionosphere Electrodynamics General Circulation Model (TIE-GCM)
CLP 11th October 2010
10:00 to 11:00
P Challenor How to design a climate ensemble
CLP 12th October 2010
10:00 to 11:00
A variance constraining Kalman filter for data assimilation
Data assimilation aims to solve one of the fundamental problems of numerical weather prediction - estimating the optimal state of the atmosphere given a numerical model of the dynamics, and sparse, noisy observations of the system. A standard tool in attacking this filtering problem is the Kalman filter.

We consider the problem when only partial observations are available. In particular we consider the situation where the observational space consists of variables which are directly observable with known observational error, and of variables of which only their climatic variance and mean are given. We derive the corresponding Kalman filter in a variational setting.

We analyze the variance constraining Kalman filter (VCKF) for a simple linear toy model and determine its range of optimal performance. We explore the variance constraining Kalman filter in an ensemble transform setting for the Lorenz-96 system, and show that incorporating the information on the variance of some unobservable variables can improve the skill and also increase the stability of the data assimilation procedure.

Using methods from dynamical systems theory we then investigate systems where the unobserved variables evolve deterministically (but chaotically) on a fast time scale.

This is joint work with Lewis Mitchell and Sebastian Reich.
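One simplified way to picture the climatic mean/variance constraint, not the variational derivation used in this work, is as a weak pseudo-observation appended to a standard Kalman update: the unobserved variable is drawn towards its climatological mean, with the climatological variance assigned as the observation error. A two-variable sketch with invented numbers:

# Standard Kalman update augmented with a weak climatological pseudo-observation.
import numpy as np

xb = np.array([1.5, -0.7])                     # background: [observed var, unobserved var]
B = np.array([[0.5, 0.1],
              [0.1, 0.8]])                     # background error covariance

H = np.array([[1.0, 0.0],                      # row 1: real observation of variable 1
              [0.0, 1.0]])                     # row 2: pseudo-observation of variable 2
y = np.array([1.2,                             # actual measurement
              0.0])                            # climatological mean of unobserved variable
R = np.diag([0.1,                              # measurement error variance
             2.0])                             # climatological variance (weak constraint)

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # Kalman gain
xa = xb + K @ (y - H @ xb)
print("analysis state:", np.round(xa, 3))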

CLP 13th October 2010
10:00 to 11:00
Using simple models and rigorous mathematics to improve operational atmosphere and ocean modelling
CLP 14th October 2010
10:00 to 11:00
Stochastic model reduction on manifolds
This talk presents challenges and first results on how to do stochastic parametrizations for deterministic systems with some underlying conservation laws such as energy conservation. We introduce the problem, and then consider a time scale separated Hamiltonian toy model where the fast chaotic dynamics will be modelled by noise in such a way as to preserve the energy conservation of the full deterministic system. The reduction is done in two stages: First projecting the noise onto the energy manifold, and then subsequently performing singular perturbation theory.

This is joint work with David Lewis.

CLP 14th October 2010
14:00 to 15:00
A perspective on turbulent flows: cascade dynamics in rotating flows
CLP 15th October 2010
10:00 to 11:00
Predictability of the 2nd kind: numerical discretisations and their climatic response
CLP 18th October 2010
10:00 to 11:00
S Reich Predicting the unpredictable: combining models and data
CLP 18th October 2010
12:00 to 13:00
Regimes of global atmospheric circulation
CLP 19th October 2010
10:00 to 11:00
W Dewar Inviscid dissipation of balanced flow by topography
CLP 19th October 2010
11:15 to 12:15
Likely temporal abrupt shifts in the carbon cycle
CLP 20th October 2010
10:00 to 11:00
C Penland Stochastic diagnosis of climate dynamics, with application to tropical SSTs
CLP 20th October 2010
12:00 to 13:00
Predicting extremes in the midlatitudinal atmospheric circulation using regime-dependent statistical modelling
CLP 20th October 2010
14:00 to 15:00
S Shin Applications of Hamiltonian particle-mesh methods for atmospheric modelling
CLP 21st October 2010
10:00 to 11:00
Changes in weather and climate systems, predictable and unpredictable
CLP 21st October 2010
12:00 to 13:00
Rate-dependent tipping points - a comparative approach
CLP 25th October 2010
10:00 to 11:00
M Davey Aspects of long-range forecasting and ENSO
CLP 26th October 2010
10:00 to 11:00
Multi-model ensembles
CLP 27th October 2010
10:00 to 11:00
Estimates of the ocean heat budget in the Gulf stream
The Northwest Atlantic has the largest annually averaged oceanic surface heat loss in the world's ocean, with a wintertime spatial average of over 200 W/m2. In addition, it is at this location in the Northern Hemisphere that the zonally averaged meridional heat transport by the ocean reduces abruptly to below 0.5 PetaWatts, with the atmosphere taking over the bulk of heat transport. Interannual changes of the exchange of heat between the ocean and the atmosphere are large in this region, with magnitudes exceeding 50 W/m2 averaged over the Gulf Stream, equivalent to about 0.1 PetaWatts. To quantify the source of interannual changes in surface heat flux and heat storage, we examine the ocean heat budget in four different ocean models: a 1/10-degree prognostic ocean model (POP), a 1/3-degree assimilative model of the North Atlantic (Mercator), a diagnostic model, and an 18 km Green's function assimilation model (ECCO2). The solutions of the last three models all depend directly on the observed altimetric sea surface height. We focus on the Gulf Stream once it leaves the coast. While the models disagree in detail, they do agree on some points. The ocean can locally store substantial quantities of heat on these time scales. The changes in heat content on interannual time scales are concentrated in the upper 400 m of the water column, while for monthly time scales they are concentrated in the upper 100 m. Monthly heat content changes are controlled by the atmosphere, with the ocean lagging the surface heat flux by one month. On interannual time scales, heat content changes are controlled by horizontal heat transport convergence, with only a secondary role for the surface heat fluxes. At the same time, the surface fluxes are negatively correlated with the oceanic heat content, lagging it by 3 months. This indicates that the surface fluxes damp the heat content changes in the ocean instead of forcing them, suggesting an active role of ocean dynamics in the climate system. Additional comments on the use of ocean models for studies of the ocean's role in climate are also given.
CLP 9th November 2010
10:00 to 11:00
J Thuburn Energy and enstrophy cascades in numerical models
CLP 12th November 2010
10:00 to 11:00
Methods of time series analysis for climate tipping points
CLP 16th November 2010
10:00 to 11:00
B Bates & R Chandler Climate change Down Under: challenges, opportunities and uncertainty
CLP 17th November 2010
14:00 to 15:00
New thinking about statistical information and its applications to climate prediction
Bayesian models and methods have arguably been the gold standard for probabilistic assessment of scientific uncertainty for 250 years. The theory of belief functions, also known as Dempster-Shafer (DS) theory, referring to its foundations in the 1960s and 1970s, has the potential to break through limitations and confusions that are the legacy of the major conceptual and technical advances of the 20th Century. I start with a brief sketch of the sources, motivations, and key elements of DS, including why this may be a propitious time for a new look at a not-so-new formalism. For statisticians, key ideas are that probabilities are personal and extend beyond the Bayesian paradigm to include probabilities of "don't know," providing new tools for addressing problems of robustification, multiparameter estimation, and nonidentifiability. For climatology, DS opens new ways to think about what is predictable and what is not predictable.
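The "probability of don't know" can be made concrete with a two-outcome toy example: mass is assigned to sets of outcomes rather than to single outcomes, the mass on the full set playing the role of "don't know", and belief and plausibility then bracket any ordinary probability. The sketch below is a generic illustration, not drawn from the talk.

# Minimal Dempster-Shafer example: mass on *sets* of outcomes; the mass on the
# full set {"warmer", "cooler"} represents "don't know".
masses = {
    frozenset({"warmer"}): 0.5,
    frozenset({"cooler"}): 0.2,
    frozenset({"warmer", "cooler"}): 0.3,   # "don't know"
}

def belief(event):
    return sum(m for s, m in masses.items() if s <= event)

def plausibility(event):
    return sum(m for s, m in masses.items() if s & event)

event = frozenset({"warmer"})
print("belief:", belief(event), " plausibility:", plausibility(event))
# belief 0.5 <= P(warmer) <= plausibility 0.8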
CLP 22nd November 2010
14:00 to 15:00
R Niven Derivation of a local form of the Maximum Entropy Production principle
CLP 23rd November 2010
10:00 to 11:00
Fast Bayesian palaeoclimate reconstruction
OFB006 24th November 2010
14:05 to 14:30
Estimating and reducing uncertainty in climate prediction: key findings from the Newton Institute Programme
Can we ever model climate accurately? Can we detect early warning signs of dramatic climate change? How can climate science help create a greener economy?
OFB006 24th November 2010
14:30 to 16:00
Panel discussion - The scientific uncertainties and their implications
Chair: Jonathan Leake (Science & Environment Editor, The Sunday Times). Panel: Tim Lenton (Earth System Modelling, University of East Anglia); Tim Palmer; Vicky Pope (Head of Climate Change Advice, the Met Office); Alan Thorpe (Chief Executive, Natural Environment Research Council).
OFB006 24th November 2010
16:30 to 18:00
Panel discussion - Policy in the face of uncertainty
Chair: Oliver Morton (Energy and Environment Editor, The Economist). Panel: Sir John Beddington (Government Chief Scientific Adviser); Ralph Cicerone (President, National Academy of Sciences of the USA); Abyd Karmali (Managing Director, Global Head of Carbon Markets, Bank of America Merrill Lynch); Lord Adair Turner (Chairman, Financial Services Authority, and Committee on Climate Change).
CLP 25th November 2010
10:00 to 11:00
V Lucarini Entropy production and efficiency in the climate system
CLP 26th November 2010
10:00 to 11:00
R Niven Application of the Maximum Entropy Production principle to turbulent fluid mechanics and planetary systems
Recently, a conditional form of the Maximum Entropy Production (MaxEP) principle was derived based on Jaynes' maximum entropy method, using an entropy defined on the fluxes through an infinitesimal element of a flow system. The analysis is analogous to Gibbs' formulation of equilibrium thermodynamics, expressing the balance between changes in entropy inside and outside a system, but is formulated for a non-equilibrium flow system. The MaxEP principle emerges as the extremum condition in some circumstances. In this seminar, a generic form of the derivation is first provided, encompassing seemingly disparate formulations of equilibrium thermodynamics, local steady-state flow and global steady-state flow (and other systems). The implications of the analysis are explored for several systems, including (i) the transition between laminar and turbulent flow in a pipe, and (ii) the modelling of planetary climate systems, including solar and extrasolar planets.
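A standard toy used to illustrate the MaxEP idea in a climate setting (in the spirit of two-box models, not the conditional derivation described above) is to choose the equator-to-pole heat flux F that maximises the entropy production F(1/T2 - 1/T1), with each box temperature fixed by its local energy balance; all numbers below are illustrative.

# Classic two-box MaxEP toy: pick the poleward heat flux that maximises EP.
import numpy as np

S1, S2 = 300.0, 170.0        # absorbed shortwave, tropics / high latitudes (W m-2)
sigma, eps = 5.67e-8, 0.6    # Stefan-Boltzmann constant, effective emissivity

def temperatures(F):
    # local balance: absorbed shortwave - transport = emitted longwave
    T1 = ((S1 - F) / (eps * sigma)) ** 0.25
    T2 = ((S2 + F) / (eps * sigma)) ** 0.25
    return T1, T2

fluxes = np.linspace(0.0, 60.0, 601)
ep = []
for F in fluxes:
    T1, T2 = temperatures(F)
    ep.append(F * (1.0 / T2 - 1.0 / T1))

best = fluxes[int(np.argmax(ep))]
print("MaxEP heat flux (W m-2):", round(best, 1),
      "  box temperatures (K):", [round(T, 1) for T in temperatures(best)])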
CLP 2nd December 2010
14:00 to 15:00
A systems-biology investigation of heat shock protein regulated gene networks: mathematical models, predictions and laboratory experiments
CLPW04 6th December 2010
14:00 to 15:00
K Horsburgh & J Huthnance Overview of Newton Institute "climate" programme
This talk will give an overview of the activities, emerging issues and findings from the programme. These include:

  • the statistical combination of scenarios, of model ensembles and of parametrisation values, including constraints from observations, all with their related uncertainties
  • the approach to inferences from paleo-data
  • some developments in identifying approaches to "tipping points"
  • contributions that reduced models may make to our understanding.
CLPW04 6th December 2010
15:30 to 16:30
Synthesising Model Projections of Future Climate Change
It has never been a more exciting time to be a climate modeler. International coordination of model experiments by the Coupled Model Intercomparison Project (CMIP5) will see the estimated production of over 2 petabytes of model output from 20 modeling centers and 40 different climate models (downloading the data on a home broadband connection would take 25 years). There will be new model functionality in terms of the processes represented in the models, including chemistry and biology, new forcing scenarios including palaeoclimate and idealized cases, and new experiments initialized with observations to look at near-term climate variability and change. Moreover there is an unprecedented interest in, and scrutiny of, climate model projections from fellow scientists, from the public and from governments.

How on earth are we to make sense of this information overload? This talk will review some of the approaches that we expect will be used to analyse the new CMIP5 multi-model database. Some approaches rely on physical understanding of the climate system to make sense of the data. Some use simple statistical approaches to rationalize the output. In some specific cases, more complex statistical approaches may be applied carefully. Finally, all the approaches will have to be synthesized to provide a summary of the state of climate modeling science. This challenge will be discussed.
CLPW04 6th December 2010
17:00 to 18:00
After Climategate and Cancun: What Next for Climate Science?
The last year has been a difficult time for climate science, with leaked emails undermining public confidence and perhaps contributing to the failure of Copenhagen to reach an agreement on emissions cuts. On top of this, mid-term elections in the US suggest it will be difficult for President Obama to carry into legislation any substantial agreements on emissions cuts that may be made in Cancun, making the chances of such agreements less likely in the first place.

How does climate science move forward in the light of these events? The evidence above suggests, whether we like it or not, that the arguments of so-called climate "sceptics", that the cost of major emissions cuts is not justified given existing uncertainties in climate predictions, have substantial political traction. Hence I believe that we are unlikely to move from the current stalemate without further advancing the science of climate change, in particular without reducing these uncertainties substantially. But this is not an easy task. In this talk I will review why these uncertainties exist in the first place. Ultimately, as we shall see, the issue is mathematical: we know the equations of climate at the level of partial differential equations, but we do not know how to solve these equations without at the same time producing biases and errors which are as large as the climate signal we are trying to predict. I will outline two new areas of work, which have been a focus of activity at the Isaac Newton Institute over the last four months, designed to reduce uncertainty in climate prediction. One is in the area of stochastic closure schemes for climate models, the other is in the area of data assimilation.

Putting this new science into practice, however, is not straightforward, and will require new computing infrastructure hitherto unavailable to climate science. Hence, I will conclude with a plea to the governments of the world. Let's take the current stalemate of opinion as justifying a renewed effort to do all we humanly can to reduce existing uncertainties in predictions of climate change, globally and regionally, so we can move the argument forward, one way or the other, for the good of humanity. This will require a new sense of dedication both by scientists and by politicians around the world: by scientists to focus their efforts on the science needed to reduce uncertainties, and by politicians to fund the technological infrastructure needed to enable this science to be done as effectively and speedily as possible.
CLPW04 7th December 2010
09:30 to 10:30
Climate modelling at Quaternary time scales
Climate changes at time scales of several (tens of) thousands of years, like glacial-interglacial cycles, may be viewed as an 'emergent feature' of the climate system. They can be understood at different levels: from a general circulation perspective (how changes in astronomical elements affect the hydrological, nutrient and carbon cycles); from a dynamical systems perspective (locate and characterize bifurcation points, detect synchronisation phenomena, identify couplings between different time scales...); or from a statistical perspective (estimate the probability of events and assess the predictability of the climate system at these time scales). The ambition of a general theory of Pleistocene climate is to merge these approaches.

The recent mathematical developments reviewed during the present Newton Institute programme constitute promising avenues to this end. For example, statistical emulators allow us to explore in depth the input and parameter spaces of general circulation simulators, including their sensitivity to the astronomical forcing. Monte-Carlo statistical methods allow us to calibrate low-order stochastic dynamical systems, and guide the process of criticism and selection of models. The purpose of this talk is to summarise advances gained during the Newton Institute along these lines.
CLPW04 7th December 2010
11:00 to 11:40
A statistical emulator for HadCM3
As part of the PalaeoQUMP project, we would like to emulate HadCM3 North American Mid-Holocene summer temperature (MTWA) anomalies, at the resolution of the simulator (3.75 by 2.5 degrees). This will allow us to calibrate the parameterisation of HadCM3 to palaeoclimate measurements at the gridcell level, and to do continental-scale reconstructions based on these measurements.

Emulation has been a major theme during this programme. In the broader field of Computer Experiments its utility is clear, and its application to simulators with scalar outputs has now become standard. However, jointly emulating a large and structured collection of simulator outputs, such as a spatial field of climate quantities, is much more challenging. During this programme we have made progress on the principles of such an emulation, notably the role of dimensional reduction and of separability, on the computational aspects, and also in building an actual emulator for HadCM3.

This talk will focus on the practical aspects of building an emulator, and show how our emulator can be used to learn about HadCM3, and to learn about the value of palaeoclimate measurements for parameter calibration.
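The basic emulator construction can be sketched in a few lines: run the simulator at a small design of parameter settings and fit a Gaussian process that predicts the output, with uncertainty, at untried settings. In the sketch below a cheap analytic function stands in for HadCM3, scikit-learn's GaussianProcessRegressor provides the Gaussian process, and the dimension reduction needed for spatial output fields is not shown.

# Miniature emulation workflow: design runs, fit a GP, predict at new settings.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(params):
    # stand-in for an expensive climate simulation returning a scalar summary
    x1, x2 = params
    return np.sin(3 * x1) + 0.5 * x2**2

rng = np.random.default_rng(8)
design = rng.uniform(0.0, 1.0, size=(20, 2))           # 20 training runs
outputs = np.array([simulator(p) for p in design])

emulator = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
emulator.fit(design, outputs)

new_settings = np.array([[0.25, 0.8], [0.9, 0.1]])
mean, std = emulator.predict(new_settings, return_std=True)
for p, m, s in zip(new_settings, mean, std):
    print("params", p, "-> emulated output", round(m, 3), "+/-", round(s, 3))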
CLPW04 7th December 2010
11:40 to 12:30
Assessing climate uncertainty: models, meaning and methods
CLPW04 7th December 2010
14:00 to 15:00
D Stephenson Grand Challenges in Probabilistic Climate Prediction
This talk will summarise the unsolved mathematical and statistical challenges in probabilistic climate prediction that have emerged from theme B of this programme and the successful satellite workshop organised by the University of Exeter in September 2010.

Related Links:
  • http://www.newton.ac.uk/programmes/CLP/clpw02p.html - Exeter workshop talks and videos
  • http://empslocal.ex.ac.uk/iniw - Exeter workshop wiki pages
    CLPW04 7th December 2010
    16:30 to 17:00
    R Chandler Rewarding strength, discounting weakness: combining information from multiple climate simulators
    Although modern climate simulators represent our best available understanding of the climate system, projections can vary appreciably between them. Increasingly therefore, users of climate projections are advised to consider information from an "ensemble" of different simulators or "multimodel ensemble" (MME).

    When analysing a MME the simplest approach is to average each quantity of interest over all simulators, possibly weighting each simulator according to some measure of "quality". This approach has two drawbacks. Firstly, it is heuristic: results can differ between weighting schemes, leaving users little better off than before. Secondly, no simulator is uniformly better than all others: if projections of several different quantities are required the rankings of the simulators (and hence the implied weights) may differ considerably between quantities of interest.

A more sophisticated approach is to use modern statistical techniques to derive probability density functions (pdfs) for the quantities of interest. However, no systematic attempt has yet been made to sample the range of possible modelling decisions in building an MME: therefore it is not clear to what extent the resulting "probabilities" are in any way relevant to the downstream user.

    This talk presents a statistical framework that addresses all of these issues, building on Leith and Chandler (2010). The emphasis is on conceptual aspects, although the framework has been applied in practice elsewhere. A mathematical analysis of the framework shows that:

(a) Information from individual simulators is automatically weighted, alongside that from historical observations and from prior knowledge.
(b) The weights reflect the relative value of different information sources for each quantity of interest. Thus each simulator is rewarded for its strengths, whereas its weaknesses are discounted.
(c) The weights for an individual simulator depend on its internal variability, its expected consensus with other simulators, the internal variability of the real climate, and the propensity of simulators collectively to deviate from the real climate.
(d) Some subjective judgements are inevitable.

    Reference: Leith, N.A. and Chandler, R.E. (2010). A framework for interpreting climate model outputs. J. R. Statist. Soc. C 59(2): 279-296.
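The flavour of "automatic" weighting in point (a) can be illustrated with a toy normal hierarchical combination, in which each simulator's projection is weighted by the precision of what it says about reality (its internal variability plus a collective discrepancy term). This is only a caricature with invented numbers, not the Leith-Chandler framework itself.

```python
# Toy precision-weighted combination of simulator projections with an
# observational constraint. All numbers and variances are invented.
import numpy as np

sim_means = np.array([2.1, 3.4, 2.8, 4.0])   # simulator projections of one quantity
sim_vars  = np.array([0.2, 0.6, 0.3, 1.0])   # each simulator's internal variability
tau2      = 0.4                              # collective simulator discrepancy from reality
obs_mean, obs_var = 2.5, 0.5                 # historically constrained estimate

# Each simulator's effective weight combines its own variability with the
# shared discrepancy; the observation gets its own precision weight.
w_sim = 1.0 / (sim_vars + tau2)
w_obs = 1.0 / obs_var

post_mean = (np.sum(w_sim * sim_means) + w_obs * obs_mean) / (np.sum(w_sim) + w_obs)
post_var  = 1.0 / (np.sum(w_sim) + w_obs)
print(f"combined estimate: {post_mean:.2f} +/- {np.sqrt(post_var):.2f}")
```

Because the variances, and hence the weights, can differ from one quantity of interest to the next, a simulator that is informative about one quantity is not automatically trusted for another.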
    CLPW04 7th December 2010
    17:00 to 17:30
    Adventures in Emulation
Emulators are widely used in climate modelling and in the analysis of computer experiments in general. In this talk I explore some ideas on how we might extend the way we build and exploit emulators to investigate climate problems. In particular, I will consider how extremal emulators can be used to look at extremes and how they change across the model input space, and how emulators can be used to improve the parameterisation of sub-grid scale processes.
    CLPW04 8th December 2010
    09:30 to 10:30
    P Cox New Newtonian Alchemy: Turning Noise into Signal
    This talk will summarise progress on the application of three strands of the Newton Institute Mathematical and Statistical Approaches to Climate Modelling and Prediction Programme, which are all connected through the general notion of turning noise into signal:

(a) Maximum Entropy Production (MEP) principles of the climate system – application to surface-to-atmosphere turbulent fluxes.
(b) Time-series techniques to give forewarning of climate tipping points – application to possible climate-driven Amazon forest dieback.
(c) Fluctuation theories to determine climate system sensitivities – using interannual variability of CO2 as a constraint on the sensitivity of tropical land carbon to climate change.

    In the case of (b) and (c) short-timescale variability is used to determine the vulnerability of the system to an abrupt change and a warming-induced release of land carbon, respectively. In (a), the application of MEP to vertical turbulent fluxes puts these “noisy” fluxes at centre-stage in the climate system, rather than being treated as a residual term in the surface energy balance.

    The Newton Institute programme has stimulated progress in all three of these areas. This talk will summarize the current state-of-play and suggest priorities for follow-on research.
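As a minimal illustration of the time-series idea in strand (b), the sketch below computes lag-1 autocorrelation in a sliding window, the standard "critical slowing down" indicator. The time series here is synthetic; in an application it would be an observed or modelled record, and the details (detrending, window length, significance testing) matter.

```python
# Minimal sketch of an early-warning indicator: lag-1 autocorrelation
# estimated in a sliding window. The series is synthetic, not real data.
import numpy as np

def rolling_lag1_autocorr(x, window):
    """Lag-1 autocorrelation in a sliding window over the series x."""
    out = np.full(len(x), np.nan)
    for i in range(window, len(x) + 1):
        seg = x[i - window:i] - x[i - window:i].mean()
        out[i - 1] = np.dot(seg[:-1], seg[1:]) / np.dot(seg, seg)
    return out

# Synthetic AR(1) series whose memory slowly increases, mimicking the
# approach to a bifurcation.
rng = np.random.default_rng(1)
n = 1000
phi = np.linspace(0.2, 0.95, n)           # slowly rising autocorrelation
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal()

indicator = rolling_lag1_autocorr(x, window=200)
# A sustained upward trend in `indicator` is the candidate warning signal.
```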
    CLPW04 8th December 2010
    10:30 to 11:00
    Homogenized Hybrid Particle Filters in Multi-scale Environments
Multi-scale problems are commonly recognized for their complexity, yet the main challenge in multi-scale modeling is to recognize their simplicity, and make use of it to see how information interacts with these complex structures and scales. This paper outlines efficient methods that combine the information from the observations with the dynamics of coupled ocean-atmosphere models and seek to improve decadal climate predictions by developing, testing, and improving the initialization of different components of the Earth system, particularly the ocean.

    We present a reduced-order particle filtering algorithm, the Homogenized Hybrid Particle Filter (HHPF), for state estimation in nonlinear multi-scale dynamical systems. This method combines stochastic homogenization with nonlinear filtering theory to construct an efficient particle filtering algorithm for the approximation of a reduced-order nonlinear filter for multi-scale systems. In this work, we show that the HHPF gives a good approximation of the conditional law of the coarse-grained dynamics in the limit as the scaling parameter goes to zero and the number of particles is sufficiently large.
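For orientation, the sketch below is a plain bootstrap particle filter applied to the slow variable of a toy two-scale system; it is not the HHPF, only the filtering step that the homogenized, reduced-order construction is designed to make tractable. The dynamics, noise levels, and observation model are invented.

```python
# Plain bootstrap particle filter on the slow variable of a toy slow-fast
# system (Euler-Maruyama discretisation). Illustrative only; not the HHPF.
import numpy as np

rng = np.random.default_rng(2)
dt, eps, n_steps, n_particles, obs_noise = 0.01, 0.05, 200, 500, 0.1

def step(x, y):
    """One step of a slow variable x coupled to a fast variable y (scale eps)."""
    xn = x + dt * (y - x) + 0.1 * np.sqrt(dt) * rng.standard_normal(x.shape)
    yn = y + (dt / eps) * (x - y) + np.sqrt(dt / eps) * rng.standard_normal(y.shape)
    return xn, yn

# Synthetic truth and noisy observations of the slow variable every 10 steps.
x_true, y_true = np.array([1.0]), np.array([0.0])
observations = {}
for k in range(n_steps):
    x_true, y_true = step(x_true, y_true)
    if (k + 1) % 10 == 0:
        observations[k] = x_true[0] + obs_noise * rng.standard_normal()

# Bootstrap particle filter tracking the slow variable.
x = rng.normal(1.0, 0.5, n_particles)
y = rng.normal(0.0, 1.0, n_particles)
filter_mean = []
for k in range(n_steps):
    x, y = step(x, y)                              # propagate all particles
    if k in observations:                          # assimilate an observation
        w = np.exp(-0.5 * ((observations[k] - x) / obs_noise) ** 2)
        idx = rng.choice(n_particles, n_particles, p=w / w.sum())
        x, y = x[idx], y[idx]                      # resample by weight
    filter_mean.append(x.mean())
```

The HHPF replaces the explicit propagation of the fast variable with its homogenized (averaged) effect on the slow dynamics, which is where the computational saving comes from.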
    CLPW04 8th December 2010
    11:30 to 12:00
    The Community Integrated Assessment System, CIAS, and its user interface CLIMASCOPE
Successful communication of knowledge to climate change policy makers requires the careful integration of scientific knowledge in an integrated assessment that can be clearly communicated to stakeholders, and which encapsulates the uncertainties in the analysis and conveys the need for using a risk assessment approach. It is important that (i) the system is co-designed with the users; (ii) relevant disciplines are included; (iii) assumptions made are clear; (iv) the robustness of outputs to uncertainties is demonstrated; (v) the system is flexible so that it can keep up with changing stakeholder needs; and (vi) the results are communicated clearly and are readily accessible.

The "Community Integrated Assessment System" (CIAS) is a unique multi-institutional, modular, and flexible integrated assessment system for modelling climate change which fulfils the above criteria. It differs from other integrated models in being flexible, allowing various combinations of component modules to be connected together into alternative integrated assessment models. Modules may be written at different institutions, in different computer languages, and/or based on different operating systems. Scientists are able to determine which particular CIAS coupled model they wish to use through a web portal. This includes the facility to implement a Latin hypercube experimental design, facilitating formal uncertainty analysis. Further exploration of robustness is possible through the ability to select, for example, alternative climate models to address the same questions.

CIAS has been applied to study climate change mitigation, for example through the AVOIDing dangerous climate change project for the UK Department of Energy and Climate Change (DECC), in which the avoided impacts (benefits) of alternative climate policies were compared to no-policy baselines. AVOID is a DECC-funded initiative which allows fine tuning of research tools and outputs for the use of policy makers. A second web portal, CLIMASCOPE, is being developed to allow a wide range of users free access to regional climate change projections and to encourage risk assessment through encapsulation of uncertainties. We have begun discussions on appropriate design and use of this tool with stakeholders located in Mexico and Madagascar. Both the AVOID and CLIMASCOPE discussions provide fora in which we can come to better understand stakeholder needs, and use this knowledge to guide the evolving design of CIAS.
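As an illustration of the Latin hypercube design mentioned above, the short sketch below generates a space-filling set of simulator input settings; the parameter names and ranges are hypothetical placeholders, not the CIAS configuration.

```python
# Minimal Latin hypercube design over simulator inputs, of the kind used for
# formal uncertainty analysis. Parameter names and ranges are hypothetical.
from scipy.stats import qmc

param_ranges = {
    "climate_sensitivity": (1.5, 6.0),    # degrees C per CO2 doubling
    "aerosol_forcing":     (-1.5, -0.2),  # W m^-2
    "ocean_diffusivity":   (0.5, 3.0),    # cm^2 s^-1
}
lower = [lo for lo, hi in param_ranges.values()]
upper = [hi for lo, hi in param_ranges.values()]

sampler = qmc.LatinHypercube(d=len(param_ranges), seed=0)
design = qmc.scale(sampler.random(n=100), lower, upper)   # 100 x 3 design matrix

# Each row of `design` is one parameter setting to run through the coupled
# model; the spread of the resulting outputs feeds the uncertainty analysis.
```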
    CLPW04 8th December 2010
    12:00 to 12:30
    Bayesian estimation of the climate sensitivity based on a simple climate model fitted to global temperature observations

Magne Aldrin, Norwegian Computing Center and University of Oslo
Marit Holden, Norwegian Computing Center
Peter Guttorp, Norwegian Computing Center and University of Seattle

    The climate sensitivity is a central parameter in understanding climate change. It is defined as the increase in global temperature due to a doubling of CO2 compared to pre-industrial time. Our aim is to estimate the climate sensitivity by modelling the relationship between (estimates of) radiative forcing and observations of global temperature and ocean heat content in post-industrial time. Complex general circulation models are computationally expensive for this purpose, and we use instead a simple climate model of reduced complexity. This climate model is deterministic, and we combine it with a stochastic model to do proper inference.

    Our combined model is

y_t = m_t(x_{t^-}, S, θ) + n_t

Here, y_t is the observed vector of global temperature and ocean heat content in year t, and m_t is the corresponding output from the simple climate model. Furthermore, the model input x_{t^-} is the unknown radiative forcing in year t and previous years. S is the climate sensitivity, which is the parameter of interest, and θ is a vector of other model parameters. Finally, n_t is an autoregressive error term accounting for model errors and measurement errors. We use a flat prior for the climate sensitivity and informative priors for most other parameters.

    The model was fitted to observations of global temperatures from 1850 to 2007 and of ocean heat content from 1955 to 2007. The work is still in progress, so the estimate of the climate sensitivity is preliminary. However, this preliminary estimate is a few degrees Celsius above zero, which is comparable with other estimates.

    We believe that this approach is a valuable addition to other methods for estimating the climate sensitivity, where physical knowledge and observed data are linked together by statistical modelling and estimation methods.

From a statistical point of view, it is an example of calibration of computer models, but with more emphasis on modelling the discrepancy between the observations and the computer model than on using an emulator or surrogate model for the computer model, which has been central in much of the recent work in this area.
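To make the statistical structure concrete, here is a toy log-likelihood for a model of the form y_t = m_t(x_{t^-}, S, θ) + n_t with an AR(1) error term, using a crude impulse-response stand-in for the simple climate model; it is illustrative only, not the authors' model, forcing data, or priors.

```python
# Toy calibration structure: simulator output plus AR(1) error. The "simple
# climate model" below is a crude relaxation stand-in, not the authors' model.
import numpy as np

def toy_climate_model(forcing, S, tau=8.0):
    """Temperature response relaxing towards S * forcing / F2x with timescale tau."""
    F2x = 3.7                               # forcing for doubled CO2, W m^-2
    T = np.zeros_like(forcing)
    for t in range(1, len(forcing)):
        T[t] = T[t-1] + (S * forcing[t] / F2x - T[t-1]) / tau
    return T

def log_likelihood(y, forcing, S, phi, sigma):
    """Conditional Gaussian log-likelihood with AR(1) residuals n_t."""
    resid = y - toy_climate_model(forcing, S)           # n_t = y_t - m_t
    innov = resid[1:] - phi * resid[:-1]                # AR(1) innovations
    return -0.5 * np.sum(innov**2 / sigma**2 + np.log(2 * np.pi * sigma**2))

# Synthetic check: data generated with S = 3 should give a reasonable likelihood.
rng = np.random.default_rng(3)
forcing = np.linspace(0.0, 2.5, 158)                    # ~1850-2007, W m^-2 (made up)
n = np.zeros(158)
for t in range(1, 158):
    n[t] = 0.6 * n[t-1] + 0.08 * rng.standard_normal()
y = toy_climate_model(forcing, S=3.0) + n
print(log_likelihood(y, forcing, S=3.0, phi=0.6, sigma=0.08))
```

In a Bayesian fit this log-likelihood would be combined with the priors and explored by MCMC over (S, phi, sigma) and the other parameters.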
    CLPW04 8th December 2010
    15:30 to 16:10
    The importance of numerical time-stepping errors
    Comprehensive assessments of uncertainty in climate prediction models should in principle consider contributions from the discretised numerical schemes, as well as from the parameterised physics and other sources. The numerical contributions are often assumed to be negligible, however. This talk reviews the evidence for uncertainty arising from time-stepping schemes, and suggests a possible avenue for progress to reduce it.

    The context for the progress is that many climate models use the simple leapfrog scheme in concert with the Robert-Asselin filter. Unfortunately, this filter introduces artificial damping and degrades the formal accuracy, because of a conservation problem. A simple modification to the filter is proposed, which fixes the conservation problem, thereby reducing the artificial damping and increasing the formal accuracy.

    The Robert-Asselin-Williams (RAW) filter may easily be incorporated into existing climate models, via the addition of only a few lines of code that are computationally very inexpensive. Results will be shown from recent implementations of the RAW filter in various models. The modification will be shown to reduce model biases and to significantly improve the predictive skill.
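The sketch below integrates a simple oscillation equation with the leapfrog scheme and applies the Robert-Asselin filter together with the RAW correction, following the published form of the filter (Williams 2009): setting alpha = 1 recovers the standard filter, while alpha near 0.53 removes most of the spurious damping. The test equation and coefficients are illustrative, not taken from any particular climate model.

```python
# Leapfrog integration of dx/dt = i*omega*x with the Robert-Asselin (RA)
# filter and the RAW modification. alpha = 1 is the standard RA filter.
import numpy as np

def leapfrog_raw(omega=1.0, dt=0.2, nsteps=2000, nu=0.2, alpha=0.53):
    f = lambda x: 1j * omega * x                 # oscillation: |x| conserved exactly
    x_prev = 1.0 + 0.0j                          # filtered value at step n-1
    x_curr = x_prev * np.exp(1j * omega * dt)    # exact value at step n to start
    for _ in range(nsteps):
        x_next = x_prev + 2.0 * dt * f(x_curr)           # leapfrog step
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)  # filter displacement
        x_curr_f = x_curr + alpha * d                    # RA part of the filter
        x_next  += (alpha - 1.0) * d                     # extra RAW correction
        x_prev, x_curr = x_curr_f, x_next
    return abs(x_curr)                           # amplitude; 1.0 means no spurious damping

print("RA  filter amplitude:", leapfrog_raw(alpha=1.0))   # noticeably damped below 1
print("RAW filter amplitude:", leapfrog_raw(alpha=0.53))  # much closer to 1
```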
    CLPW04 9th December 2010
    09:30 to 10:00
    N Hritonenko Engaging with policymakers: modelling of the optimal investments into environmental maintenance, abatement, and adaptation for long-term climate policies
In order to design an efficient and sustainable climate policy, a crucial decision issue is to find the rational policy mix between environmental abatement and adaptation. An aggregated economic-environmental model is constructed that allows us to explore this topic analytically. The model is formulated in the classic Solow-Swan macroeconomic framework as an infinite-horizon social planner problem with the adaptation and abatement investments as separate decision variables. The utility function depends on the environmental quality and adaptation efforts. A qualitative analysis of the model demonstrates the existence of a unique steady state and leads to determining the optimal balance between investing in adaptation measures and in emission abatement. The outcomes of the model investigation can be implemented in associated long-term environmental policies and regulations. The model is calibrated on available economic and climate data and predictions. Some other economic-environmental models with endogenous technological change, energy restrictions, and environmental CO2 quotas will also be briefly discussed.
    CLPW04 9th December 2010
    10:00 to 11:00
    R Street Lessons learned from striving to support decision and policy making: the challenges when providing climate information
The necessity to provide climate information (historical and future climates) to support decision and policy making, particularly in the context of adaptation, raises many challenges. These challenges arise as there is an increasing need for information that can inform decisions and policy development and that at the same time reflects the evolving state of climate science and of adaptation theory and practice. These challenges include understanding uncertainties and different means of reflecting these in adaptation assessments. Addressing these challenges often means understanding the nature and processes involved in making the associated decisions and policies. It also means looking at the information from the perspective of extracting that which is necessary to inform these processes; more than just providing a description of the climate. There is also a need for decisions and policies to ‘embrace’ the uncertainties. This presentation explores some of the lessons learned and the challenges still needing to be addressed towards achieving the desired balance and to enhance the value of climate information in supporting adaptation.
    CLPW04 9th December 2010
    11:30 to 12:00
    Is it going to get wetter or drier in Uganda?
    The humanitarian and development sector is wrestling with the issue of climate change. Many of our beneficiaries in Africa, Asia and Latin America are reporting changes in weather and climate; yet, are these changes a result of natural perturbations in climate or a manifestation of anthropogenic climate change? Unless we know what we are dealing with we may respond to the threat of a changing climate inappropriately and, thereby, create vulnerability rather than reduce it – the aim of our work. We therefore need answers from the scientists. Understandably, it is impossible for climate scientists to say whether any single weather event can be attributed to climate change but we do need to know whether, for example, the current drought in East Africa is a result of climate change or a natural part of the cyclical climate of that region. If we cannot answer this ‘simple’ question then one has to ask whether climate modelling can offer anything of value to those in the humanitarian and development sector who are trying to support adaptation strategies to climate change. We may as well look at past and current trends and adapt to what we see rather than base our response strategies on abstract representations of the future which are fraught with uncertainty – climate models. Scientists are expert at saying what they cannot say about climate change but they are notoriously bad at saying what can be said about the impacts of climate change. As such, those dealing with the impacts of climate change are starting to question the role of climate science in their work – does it really have a role to play in helping people adapt to climate change? If so, what is this role? This paper will ask the simple question: ‘Is it going to get wetter or drier in Uganda’? It is hoped that the answer to this question will help the climate scientists appreciate more fully the challenges faced by the humanitarian and development sector as it responds to what appears to be an increasing incidence of ‘strange’ weather.
    CLPW04 9th December 2010
    12:00 to 12:30
    P Thornton What does the agricultural research-for-development community need from climate and weather data?
    Agricultural development in many parts of the tropics, particularly sub-Saharan Africa, faces daunting challenges, which climate change and increasing climate variability will compound in vulnerable areas. A plus-two degree world appears inevitable, and planning for and implementing successful adaptation strategies is critical if agricultural growth is to occur, food security be achieved, and household livelihoods be enhanced. Reliable climate and weather information, at various spatial and temporal scales, is essential in the design of effective strategies for adaptation to climate change and to better our understanding of the critical thresholds in global and regional food systems. Land-based observation and data collection systems in parts of the tropics have been in decline for decades, and the uncertainty in basic information adds considerable difficulty to the quantification and evaluation of impacts and adaptation options. While new technology is being brought to bear on the data issue, much remains to be done. Serious attention also needs to be given to the evaluation of uncertainty and its presentation to decision-makers, in a way that can facilitate the design and implementation of policy-relevant action.
    CLPW04 9th December 2010
    15:30 to 16:10
D Whitaker Engagement with business – what are the barriers to the use of climate data, and where should future research be taken?
This talk will give details of how financial services use climate models and how uncertainty matters to them, and will also discuss how future research might be directed.
    CLPW04 9th December 2010
    16:10 to 17:00
    H Held Climate investments optimized under uncertainty
While the EU has been advocating the 2° target over the last decade, the Copenhagen Accord (2009) now also recognizes that 'the increase in global temperature should be below 2 degrees Celsius' (compared to pre-industrial levels). In recent years, energy-economic analyses have derived welfare-optimal investment streams into low-emission energy mixes and the associated costs. According to our analyses, auxiliary targets that are in line with the 2° target could be achieved at relatively low cost if energy investments were triggered rather swiftly.

While such analyses assume 'perfect foresight' of a benevolent 'social planner', an accompanying suite of experiments explicitly acknowledges the rather uncertain nature of key responses to human decisions within the climate as well as the technology system. We outline first results in that direction and indicate an intrinsic need for generalisation within target approaches under uncertainty.
    CLP 14th December 2010
    10:00 to 11:00
    V Garreta Inference for models with implicit likelihood: a statistical review of data-assimilation and model calibration
The development of simulations (e.g., computer models) involves what is often referred to as "calibration" and "data assimilation", that is, the inference of parameters or fields related to data through the simulator. In statistics, embedding a simulator in the likelihood relating parameters to data raises an original challenge due to the lack of information about the likelihood. For example, when the simulator is deterministic, its derivatives are not known. When the simulator is stochastic, it defines a pdf that is only available through simulations. This situation is in some sense analogous to inference for the majority of stochastic differential equations (SDEs), where, even if we can simulate from the model, no likelihood is available.

Many solutions have been proposed to tackle inference problems for computer models and SDEs. Most of them emerged in physics, hydrology, genetics and other areas not directly related to statistics.

Our objective is, first, to group the various inference problems into two problems that are well defined in statistics. We will present two examples based on (i) the reconstruction of past earthquake history and (ii) palaeoclimate reconstruction using a vegetation model. Second, we review available inference methods, their weaknesses and potential improvements.
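One widely used technique for simulators with only an implicit likelihood is approximate Bayesian computation (ABC). The toy rejection sampler below illustrates the idea with a trivial stand-in simulator; the talk's earthquake-history and vegetation-model examples are far richer, and this sketch is not the method used there.

```python
# Toy ABC rejection sampler: infer a simulator parameter when the likelihood
# is only defined implicitly by forward runs. The simulator is a stand-in.
import numpy as np

rng = np.random.default_rng(4)

def simulator(theta, n=50):
    """A stochastic simulator that can be run forward but has no tractable pdf."""
    return rng.gamma(shape=theta, scale=1.0, size=n)

def summary(data):
    return np.array([data.mean(), data.std()])

observed = simulator(theta=3.0)              # pretend field data
s_obs = summary(observed)

accepted = []
for _ in range(20000):
    theta = rng.uniform(0.5, 10.0)           # draw from the prior
    s_sim = summary(simulator(theta))        # run the simulator
    if np.linalg.norm(s_sim - s_obs) < 0.3:  # keep draws whose summaries are close
        accepted.append(theta)

print("approximate posterior mean:", np.mean(accepted), "from", len(accepted), "draws")
```

The choice of summary statistics and tolerance governs how well the accepted draws approximate the true posterior, which is one of the weaknesses the talk's review of methods addresses.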

    CLP 15th December 2010
    14:00 to 15:00
    M Haque A system-biology investigation of heat shock protein regulated gene networks: mathematical models, predictions and laboratory experiments
    CLP 21st December 2010
    10:00 to 11:00
Statistical processing for ensembles of numerical weather prediction models