Key UQ methodologies and motivating applications
Monday 8th January 2018 to Friday 12th January 2018
Monday 8th January 2018
09:00 to 09:50  Registration
09:50 to 10:00  Welcome from David Abrahams (INI Director)  
10:00 to 11:00 
Jim Berger Statistical perspectives on UQ, past and present 
INI 1  
11:00 to 11:30  Morning Coffee  
11:30 to 12:30 
Ralph Smith Uncertainty Quantification from a Mathematical Perspective
From both mathematical and statistical perspectives, the fundamental goal of Uncertainty Quantification (UQ) is to ascertain uncertainties inherent to parameters, initial and boundary conditions, experimental data, and models themselves in order to make predictions with improved and quantified accuracy. Several factors motivate recent developments in mathematical UQ analysis. The first is the goal of quantifying uncertainties for models and applications whose complexity precludes sole reliance on sampling-based methods; this includes simulation codes for discretized partial differential equation (PDE) models, which can require hours to days to run. Secondly, models are typically nonlinearly parameterized, thus requiring nonlinear statistical analysis. Finally, there is often emphasis on extrapolatory or out-of-data predictions; e.g., using time-dependent models to predict future events. This requires embedding statistical models within physical laws, such as conservation relations, to provide the structure required for extrapolatory predictions.
Within this context, the discussion will focus on techniques to isolate subsets and subspaces of inputs that are uniquely determined by data. We will also discuss the use of stochastic collocation and Gaussian process techniques to construct and verify surrogate models, which can be used for Bayesian inference and subsequent uncertainty propagation to construct prediction intervals for statistical quantities of interest. The presentation will conclude with discussion pertaining to the quantification of model discrepancies in a manner that preserves physical structures.
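As a concrete illustration of the surrogate step described above, the following minimal Python sketch builds a stochastic-collocation surrogate for a toy one-parameter model and propagates input uncertainty through it to obtain a prediction interval. The model, node count, and polynomial degree are all invented for illustration.

import numpy as np

# Toy "expensive" model with one uncertain parameter theta ~ U(-1, 1).
def model(theta):
    return np.exp(0.7 * theta) * np.sin(2.0 * theta)

# Stochastic collocation: evaluate the model at Gauss-Legendre nodes and
# build a global polynomial interpolant, which serves as the surrogate.
nodes, _ = np.polynomial.legendre.leggauss(8)
coeffs = np.polynomial.legendre.legfit(nodes, model(nodes), deg=7)
surrogate = lambda t: np.polynomial.legendre.legval(t, coeffs)

# Propagate input uncertainty through the cheap surrogate rather than the
# model itself, and report a 95% prediction interval for the output.
rng = np.random.default_rng(0)
samples = surrogate(rng.uniform(-1.0, 1.0, 100_000))
print(np.percentile(samples, [2.5, 97.5]))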

INI 1  
12:30 to 13:00  Poster Blitz I  INI 1  
13:00 to 14:00  Lunch @ Churchill College  
14:00 to 15:00 
Bertrand Iooss Uncertainty quantification of numerical experiments: Several issues in industrial applications
In this talk, I will present some recent works of EDF R&D about the uncertainty management of CPU-expensive computer codes (also called numerical models). I will focus on the following topics: visualization tools for uncertainty and sensitivity analysis; functional risk curve estimation via a metamodel-based methodology; sensitivity analysis with dependent inputs using Shapley effects; and robustness analysis of quantile estimation by way of a recent technique based on perturbing the inputs' probability distributions.
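The last topic can be sketched cheaply: rather than re-running the code under a perturbed input law, one can reweight existing runs by a likelihood ratio and recompute the quantile. This is only a toy Python version of the input-perturbation idea, not the EDF methodology itself; the code function and the perturbation are invented.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def code(x):                                   # stand-in for an expensive code
    return x[:, 0] ** 2 + np.sin(x[:, 1])

# One Monte Carlo sample under the nominal input distributions.
x = np.column_stack([rng.normal(0, 1, 50_000), rng.uniform(-1, 1, 50_000)])
y = code(x)

def weighted_quantile(y, w, alpha):
    order = np.argsort(y)
    cw = np.cumsum(w[order]) / np.sum(w)
    return y[order][np.searchsorted(cw, alpha)]

# Re-use the same runs: reweight by the likelihood ratio of a perturbed
# first input (mean shifted from 0 to 0.3) against the nominal density.
w = stats.norm.pdf(x[:, 0], loc=0.3) / stats.norm.pdf(x[:, 0], loc=0.0)
print("nominal 95% quantile:  ", np.quantile(y, 0.95))
print("perturbed 95% quantile:", weighted_quantile(y, w, 0.95))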

INI 1  
15:00 to 16:00 
Ahmed ElSheikh Machine learning techniques for uncertainty quantification of subsurface reservoir models 
INI 1  
16:00 to 16:30  Afternoon Tea  
16:30 to 17:30 
Richard Bradley Philosophical Approaches to Uncertainty and its Measurement
In this talk I present recent work in philosophy and decision theory on the classification, measurement and management of uncertainty.

INI 1  
17:30 to 18:30  Welcome Wine Reception & Poster Session I at INI 
Tuesday 9th January 2018
09:00 to 10:00
Michael Goldstein Uncertainty analysis for complex systems modelled by computer simulators
This talk will present an overview of aspects of a general Bayesian methodology for performing detailed uncertainty analyses for complex physical systems which are modelled by computer simulators. Key features of this methodology are (i) simulator emulation, to allow us to explore the full range of outputs of the simulator; (ii) history matching, to identify all input choices consistent with historical data, and thus all future outcomes consistent with these choices; and (iii) structural discrepancy modelling, to make reliable uncertainty statements about the real world.
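History matching (feature (ii)) is usually driven by an implausibility measure that compares emulator predictions with the observed data, ruling out inputs where the standardised mismatch is large. A minimal sketch, with an invented emulator and toy variance numbers:

import numpy as np

# An input x is ruled out when the implausibility
#   I(x) = |z - E[f(x)]| / sqrt(emulator var + observation var + discrepancy var)
# exceeds a cutoff (3 is conventional). Emulator and variances are toy values.
def emulator_mean(x): return 3.0 * x           # stand-in for E[f(x)]
def emulator_var(x):  return 0.1 + 0.0 * x     # stand-in for Var[f(x)]

z, obs_var, disc_var = 1.2, 0.05, 0.02         # datum and its uncertainties
x = np.linspace(0.0, 1.0, 101)
impl = np.abs(z - emulator_mean(x)) / np.sqrt(emulator_var(x) + obs_var + disc_var)
not_ruled_out = x[impl < 3.0]                  # the non-implausible region
print("NROY interval:", not_ruled_out.min(), "to", not_ruled_out.max())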

INI 1  
10:00 to 11:00 
Max Gunzburger Uncertainty quantification for partial differential equations: going beyond Monte Carlo
We consider the determination of statistical information about outputs of interest that depend on the solution of a partial differential equation having random inputs, e.g., coefficients, boundary data, source term, etc. Monte Carlo methods are the most commonly used approach for this purpose. We discuss other approaches that, in some settings, incur far less computational cost, including quasi-Monte Carlo, polynomial chaos, stochastic collocation, compressed sensing, reduced-order modeling, and multilevel and multifidelity methods, and we discuss the relative strengths and weaknesses of each.
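To make the comparison concrete, here is a small sketch pitting plain Monte Carlo against quasi-Monte Carlo (Sobol points via scipy.stats.qmc) on an invented smooth quantity of interest with four random inputs.

import numpy as np
from scipy.stats import qmc

# Invented smooth quantity of interest with four random inputs in [0,1]^4,
# standing in for an output functional of a PDE with random inputs.
def quantity_of_interest(u):
    return np.exp(-np.sum(u ** 2, axis=1))

n, d = 2 ** 12, 4
rng = np.random.default_rng(2)
u_mc = rng.random((n, d))                      # pseudo-random (Monte Carlo)
u_qmc = qmc.Sobol(d, seed=2).random(n)         # Sobol low-discrepancy points
print("MC estimate: ", quantity_of_interest(u_mc).mean())
print("QMC estimate:", quantity_of_interest(u_qmc).mean())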

INI 1  
11:00 to 11:30  Morning Coffee  
11:30 to 12:30 
Henry Wynn (London School of Economics) Experimental design in computer experiments: review and recent research
Computer experiments have led to a growth in the development of certain types of experimental design which fill out the input space of a simulator in a comprehensive way: Latin Hypercube Sampling, Sobol sequences and many others. They differ from more traditional factorial experimental designs, which have typically been used to fit polynomial response surfaces. Despite this structural difference, the principles of good and indeed optimal design still apply, as do the tensions between general-purpose designs and designs tuned to particular models and utility functions. The talk will be split between the fundamental principles of experimental design as applied to computer experiments and a review of notable methods from the research community. Some attention will be given to designs based on information-theoretic principles and the connection to more general theories of learning, where Bayesian principles are particularly useful.
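A minimal sketch of one such space-filling design, a Latin hypercube generated with scipy.stats.qmc and scaled to invented simulator input ranges:

from scipy.stats import qmc

# A 20-run Latin hypercube design for a simulator with three inputs,
# scaled from the unit cube to (invented) input ranges.
sampler = qmc.LatinHypercube(d=3, seed=0)
design01 = sampler.random(n=20)                     # points in [0, 1]^3
lower, upper = [0.0, 10.0, -1.0], [1.0, 50.0, 1.0]  # illustrative ranges
design = qmc.scale(design01, lower, upper)
print(design[:3])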
INI 1  
12:30 to 13:30  Lunch @ Churchill College  
13:30 to 14:30 
Lindsay Lee UQ in earth sciences: applications and challenges
Co-authors: Ken Carslaw (University of Leeds), Carly Reddington (University of Leeds), Kirsty Pringle (University of Leeds), Graham Mann (University of Leeds), Oliver Wild (University of Lancaster), Edmund Ryan (University of Lancaster), Philip Stier (University of Oxford), Duncan Watson-Parris (University of Oxford)
I will introduce some of the applications of UQ in earth sciences and the challenges remaining that could be addressed during the programme. Earth science models are 3-D dynamic models whose CPU demands and data storage often limit the sample size for UQ. We often choose to use averages of the data and dimension reduction to carry out UQ, but it is not always clear that the uncertainty quantified is the most useful for uncertainty reduction or for increasing confidence in prediction. I will ask whether we should be applying the same techniques to understand and improve the model as those used to reduce uncertainty in predictions, showing some examples where the end goal is different. I will look at UQ when constraint or calibration is the goal, and how we incorporate uncertainty and use ‘real’ data. This will also raise the question of identifiability in our uncertainty quantification, and how to deal with and accurately quantify irreducible uncertainty. Finally, I would like to discuss how we validate our methods in a meaningful way.
INI 1  
14:30 to 15:30 
Peter Jan van Leeuwen Sequential Bayesian Inference in high-dimensional geoscience applications
Applications of sequential Bayesian inference in the geosciences, such as atmosphere, ocean, atmospheric chemistry, and land surface, are characterised by high dimensions, nonlinearity, and complex relations between system variables. While Gaussian-based approximations such as Ensemble Kalman Filters and Smoothers and global variational methods have been used quite extensively in this field, numerous problems ask for methods that can handle strong nonlinearities. In this talk I will discuss recent progress using particle filters. Three main areas of active research in particle filtering can be distinguished: exploring localisation, exploring proposal densities, and exploring (optimal) transportation (and mergers of these ideas are on the horizon).
In localisation, the idea is to split the high-dimensional problem into several smaller problems that then need to be stitched together in a smart way. The first approximate applications of this methodology have just made it to weather prediction, showing the exponentially fast developments here. However, the ‘stitching’ problem remains outstanding; the proposal-density methodology discussed next might be fruitful to incorporate here. In the proposal-density approach one tries to evolve states in state space such that they obtain very similar weights in the particle filter. Challenges are, of course, the huge dimensions, but these also provide opportunities via the existence of typical sets, which lead to preferred parts of state space for the particles. Recent attempts to exploit typical sets will be discussed.
Finally, we will discuss recent progress in (optimal) transportation. The idea here is that a set of prior particles has to be transformed into a set of posterior particles. This is an old problem in optimal transportation. However, the optimality condition poses unnecessary constraints, and by relaxing it we are able to formulate new efficient methods. Specifically, by iteratively minimising the relative entropy between the probability density of the prior particles and the posterior, a sequence of transformations emerges for each particle that seems to be tractable even for very high-dimensional spaces. A new idea is to explore localisation to obtain a more accurate description of the target posterior, but without the stitching issues mentioned above. So far, model reduction techniques, emulation, and machine learning techniques have been unsuccessful for these high-dimensional state estimation problems, but I’m keen to further understand the possibilities and limitations.
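For orientation, the following bare-bones bootstrap particle filter on an invented scalar model shows the forecast-weight-resample cycle that the localisation, proposal-density and transportation ideas above all seek to improve in high dimensions.

import numpy as np

rng = np.random.default_rng(3)
n_particles, n_steps = 500, 30

# Invented nonlinear scalar state-space model.
def step(x):                                   # stochastic model dynamics
    return 0.9 * x + 2.0 * np.sin(x) + rng.normal(0, 0.5, x.shape)

truth = np.array([0.0])
particles = rng.normal(0, 1, n_particles)
for _ in range(n_steps):
    truth = step(truth)
    y = truth[0] + rng.normal(0, 1.0)          # one noisy observation
    particles = step(particles)                # forecast every particle
    logw = -0.5 * (y - particles) ** 2         # Gaussian observation density
    w = np.exp(logw - logw.max()); w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)
    particles = particles[idx]                 # resample by weight
print("truth:", truth[0], " filter mean:", particles.mean())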
INI 1  
15:30 to 16:00  Afternoon Tea  
16:00 to 17:00 
Jakub Bijak Uncertainty quantification in demography: Challenges and possible solutions
Demography, with its over 350 years of history, is renowned for its empiricism, firm links with statistics, and the lack of a strong theoretical background. The uncertainty of demographic processes is well acknowledged, and over the past three decades, methods have been designed for quantifying the uncertainty in population estimates and forecasts. In parallel, developments in model-based demographic simulation methods, such as agent-based modelling, have recently offered a promise of shedding some light on the complexity of population processes. However, the existing approaches are still far from fulfilling this promise, and are themselves fraught with epistemological pitfalls. Crucially, complex problems are not analytically tractable with the use of traditional methods. In this talk, I will discuss the potential of uncertainty quantification in bringing together the empirical data, statistical inference and computer simulations, with insights into behavioural and social theory and knowledge of social mechanisms. The discussion will be illustrated by an example of a new research programme on model-based migration studies.

INI 1 
Wednesday 10th January 2018
09:00 to 10:00
Julia Charrier A few elements of numerical analysis for PDEs with random coefficients of lognormal type
In this talk, we will address some basic issues appearing in the theoretical analysis of numerical methods for PDEs with random coefficients of lognormal type. To begin with, such problems will be motivated by applications to the study of subsurface flow with uncertainty. We will then give some results concerning the spatial regularity of solutions of such problems, which of course impacts the error committed in spatial discretization. We will complete these results with integrability properties to deal with the unboundedness of these solutions, and then give error bounds for finite element approximations in adequate norms. Finally, we will discuss the question of dimensionality, which is crucial for numerical methods such as stochastic collocation. We will consider the approximation of the random coefficient through a Karhunen-Loève expansion, and provide bounds on the resulting error in the solution, highlighting the interest of the notion of weak error.
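A minimal numerical sketch of the truncated Karhunen-Loève expansion mentioned at the end, for a log-Gaussian coefficient on [0,1] with an exponential covariance; the grid size, correlation length and truncation level are illustrative.

import numpy as np

# Truncated Karhunen-Loeve expansion of a lognormal coefficient
# a(x) = exp(g(x)), with g a centred Gaussian field on [0,1].
n, m = 200, 10                                      # grid points, modes kept
x = np.linspace(0, 1, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.3)  # exponential covariance
evals, evecs = np.linalg.eigh(C)
evals, evecs = evals[::-1], evecs[:, ::-1]          # largest eigenvalues first

rng = np.random.default_rng(4)
xi = rng.standard_normal(m)                         # independent N(0,1) modes
g = evecs[:, :m] @ (np.sqrt(evals[:m]) * xi)        # truncated realisation
a = np.exp(g)                                       # lognormal coefficient
print("variance fraction captured by", m, "modes:", evals[:m].sum() / evals.sum())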

INI 1  
10:00 to 11:00 
Chris Oates Bayesian Probabilistic Numerical Methods
In this talk, numerical computation, such as the numerical solution of a PDE, will be treated as an inverse problem in its own right. The popular Bayesian approach to inversion is considered, wherein a posterior distribution is induced over the object of interest by conditioning a prior distribution on the same finite information that would be used in the classical numerical context. This approach to numerical computation opens the door to the application of statistical techniques, and we discuss the relevance of decision theory and probabilistic graphical models in particular detail. The concepts will be briefly illustrated through numerical experiment.
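As a small worked instance of the idea, numerical integration of f over [0,1] can be treated this way: condition a Gaussian-process prior on the same point evaluations a quadrature rule would use, and read off a Gaussian posterior for the integral. The kernel, nodes and integrand below are assumptions made for illustration.

import numpy as np

# Probabilistic integration: a GP prior on the integrand f, conditioned on
# n point evaluations, induces a Gaussian posterior for int_0^1 f(x) dx.
def k(a, b, ell=0.2):                          # squared-exponential kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

f = lambda x: np.sin(3 * x) + x ** 2           # invented integrand
X = np.linspace(0, 1, 8)                       # evaluation nodes
y = f(X)

grid = np.linspace(0, 1, 2001)                 # fine grid for kernel integrals
dx = grid[1] - grid[0]
w = np.full(grid.size, dx); w[[0, -1]] = dx / 2   # trapezoid weights
z = w @ k(grid, X)                             # z_i = int k(x, X_i) dx
kk = w @ k(grid, grid) @ w                     # int int k(x, x') dx dx'
K = k(X, X) + 1e-10 * np.eye(X.size)           # jitter for stability

post_mean = z @ np.linalg.solve(K, y)          # posterior mean of the integral
post_var = kk - z @ np.linalg.solve(K, z)      # posterior variance
print("posterior:", post_mean, "+/-", np.sqrt(max(post_var, 0.0)))
print("exact:    ", (1 - np.cos(3)) / 3 + 1 / 3)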

INI 1  
11:00 to 11:30  Morning Coffee  
11:30 to 12:30 
Howard Elman (University of Maryland, College Park) Linear Algebra Methods for ParameterDependent Partial Differential Equations
We discuss some recent developments in solution algorithms for the linear algebra problems that arise from parameter-dependent partial differential equations (PDEs). In this setting, there is a need to solve large coupled algebraic systems (which come from stochastic Galerkin methods), or large numbers of standard spatially discrete systems (from Monte Carlo or stochastic collocation methods). The ultimate goal is solutions that represent surrogate approximations that can be evaluated cheaply for multiple values of the parameters, which can be used effectively for simulation or uncertainty quantification. Our focus is on representing parameterized solutions in reduced-basis or low-rank matrix formats. We show that efficient solution algorithms can be built from multigrid methods designed for the underlying discrete PDE, in combination with methods for truncating the ranks of iterates, which reduce both cost and storage requirements of solution algorithms. These ideas can be applied to the systems arising from many ways of treating the parameter spaces, including stochastic Galerkin and collocation. In addition, we present new approaches for solving the dense systems that arise from reduced-order models by preconditioned iterative methods, and we show that such approaches can also be combined with empirical interpolation methods to solve the algebraic systems that arise from nonlinear PDEs.
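The premise behind these formats can be seen on a toy parameterised system: solutions collected over many parameter values form a numerically low-rank matrix. This sketch illustrates only that premise, not the multigrid and rank-truncation solvers of the talk; the system itself is invented.

import numpy as np

# Solutions u(s) = (A + s*B)^(-1) b of a toy parameterised system, collected
# over many parameter values s, form a numerically low-rank matrix.
rng = np.random.default_rng(5)
n, n_s = 200, 100
A = np.eye(n) + 0.5 / np.sqrt(n) * rng.standard_normal((n, n))
B = 0.3 / np.sqrt(n) * rng.standard_normal((n, n))
b = rng.standard_normal(n)

U = np.column_stack([np.linalg.solve(A + s * B, b)
                     for s in np.linspace(0, 1, n_s)])
sv = np.linalg.svd(U, compute_uv=False)
rank = int(np.sum(sv > 1e-8 * sv[0]))          # numerical rank of the snapshots
print("numerical rank:", rank, "out of", min(n, n_s))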
INI 1  
12:30 to 13:30  Lunch @ Churchill College  
13:30 to 17:00  Free Afternoon  
19:30 to 22:00  Formal Dinner at Christ's College 
Thursday 11th January 2018
09:00 to 10:00
Markus Bachmayr Low-rank approximations for parametric and random PDEs
The first part of this talk gives an introduction to low-rank tensors as a general tool for approximating high-dimensional functions. The second part deals with their application to partial differential equations with many parameters, as they arise in particular in uncertainty quantification problems. Here the focus is on error and complexity guarantees, as well as on the low-rank approximability of solutions.

INI 1  
10:00 to 11:00 
Tony O'Hagan Gaussian process emulation
Gaussian process (GP) emulators are a widely used tool in uncertainty quantification (UQ). This talk will define an emulator as a particular kind of surrogate model, explain why GPs provide a tractable and flexible basis for constructing emulators, and set out some basic GP theory. There are a number of choices to be made in building a GP emulator, and these are discussed before going on to describe other uses of GPs in UQ and elsewhere.
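A bare-bones sketch of the GP predictive equations behind an emulator, with zero prior mean and fixed hyperparameters (a real emulator would estimate these from the runs); the simulator stand-in and the design are invented.

import numpy as np

# GP emulator with zero prior mean and fixed hyperparameters.
def kern(a, b, ell=0.3, sig2=1.0):
    return sig2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

simulator = lambda x: np.sin(5 * x) * np.exp(-x)   # stand-in for a slow code
X = np.linspace(0, 1, 7)                           # design points (the runs)
y = simulator(X)

Xs = np.linspace(0, 1, 200)                        # prediction points
K = kern(X, X) + 1e-8 * np.eye(X.size)             # jitter for stability
Ks = kern(Xs, X)
mean = Ks @ np.linalg.solve(K, y)                  # posterior (emulator) mean
var = np.maximum(kern(Xs, Xs).diagonal()
                 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T)), 0.0)
print(mean[:3], var[:3])                           # emulator mean and variance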

INI 1  
11:00 to 11:30  Morning Coffee  
11:30 to 12:30 
Masoumeh Dashti The Bayesian approach to inverse problems 
INI 1  
12:30 to 13:30  Lunch @ Churchill College  
13:30 to 14:00  Informal discussion  INI 1  
14:00 to 14:30  Pre-discussion session  INI 1  
14:30 to 15:30 
Ronni Bowman (Defence Science and Technology Laboratory) Uncertainty Quantification (UQ) and Communication for Chemical and Biological Hazard Response
Effectively calculating and communicating uncertainty is crucial for effective risk management, but is difficult to achieve in practice. This is compounded when the application area is highly complex, with multiple model fidelities, and “accurate” UQ is impossible. Uncertainty communication must be clear to experts and non-experts alike and must account for a lack of understanding of the definitions of both "risk" and "uncertainty". By drawing on examples from the wide variety of defence applications that require an understanding and communication of uncertainty, and outlining the reason that uncertainty calculation and communication is crucial to decision making, this talk will explore the current state of the art and outline the many open challenges remaining. The talk will then focus on a particular challenge area and work through the complex information chain, with associated timelines, to provide an insight into the response times required to support real-world scenarios. Content includes material subject to © Crown copyright (2017), Dstl. This material is licensed under the terms of the Open Government Licence except where otherwise stated. To view this licence, visit http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3 or write to the Information Policy Team, The National Archives, Kew, London TW9 4DU, or email: psi[at]nationalarchives.gsi.gov[dot]uk
INI 1  
15:30 to 16:00  Afternoon Tea  
16:00 to 17:00 
Richard Clayton Computational models of the heart: Why they are useful, and how they would benefit from UQ
Normal and regular beating of the human heart is essential to maintain life. In each beat, a wave of electrical excitation arises in the heart's natural pacemaker and spreads throughout the rest of the heart. This wave acts as a signal to initialise and synchronise mechanical contraction of the heart tissue, which in turn generates pressure in the chambers of the heart and acts to propel blood around the body. Models have been developed for the electrical and mechanical behaviour of the heart, as well as for blood flow. In this talk I will concentrate on models of electrical activation, because failures in the initiation and normal propagation of electrical activation can result in a disruption of normal mechanical behaviour and underlie a range of common heart problems. Models of electrical activation in both single cells and in tissue are stiff, nonlinear, and have a large number of parameters. Until recently there has been little interest in how uncertainty in model parameters and other inputs influences model behaviour. However, the prospect of using these models for critical applications, including drug safety testing and guiding interventions in patients, has begun to stimulate activity in this area.

INI 1  
17:00 to 18:00  Drinks Reception at INI 
Friday 12th January 2018
09:00 to 10:00
Robert Scheichl Multilevel Monte Carlo Methods
Multilevel Monte Carlo (MLMC) is a variance reduction technique for stochastic simulation and Bayesian inference which greatly reduces the computational cost of standard Monte Carlo approaches by employing cheap, coarse-scale models with lower fidelity to carry out the bulk of the stochastic simulations, while maintaining the overall accuracy of the fine-scale model through a small number of well-chosen high-fidelity simulations.
In this talk, I will first review the ideas behind the approach and discuss a number of applications and extensions that illustrate its generality. The multilevel Monte Carlo method (in its practical form) was originally introduced and popularised about 10 years ago by Mike Giles for stochastic differential equations in mathematical finance and has attracted a lot of interest in the context of uncertainty quantification of physical systems modelled by partial differential equations (PDEs). The underlying idea had actually been discovered 10 years earlier, in 1998, in an information-theoretical paper by Stefan Heinrich, but had remained largely unknown until 2008. In recent years, there has been an explosion of activity and its application has been extended, among others, to biological/chemical reaction networks, plasma physics, interacting particle systems, and nested simulations.
More importantly for this community, the approach has also been extended to Markov chain Monte Carlo, sequential Monte Carlo and other filtering techniques. In the second part of the talk, I will describe in more detail how the MLMC framework can provide a computationally tractable methodology for Bayesian inference in high-dimensional models constrained by PDEs, and demonstrate the potential on a toy problem in the context of Metropolis-Hastings MCMC. Finally, I will close with some perspectives beyond the classical MLMC framework, in particular using sample-dependent model hierarchies and a posteriori error estimators, and extending the classical discrete, level-based approach to a new Continuous Level Monte Carlo method.
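A minimal sketch of the telescoping-sum estimator at the heart of MLMC, on an invented hierarchy whose level-l approximation carries a bias decaying like 2^-l; the sample allocation is illustrative rather than optimal.

import numpy as np

rng = np.random.default_rng(6)

def sample_level(l, n):
    """n coupled samples of P_l - P_(l-1); the coupling shares the randomness."""
    w = rng.standard_normal(n)
    p_fine = np.exp(w) * (1 + 2.0 ** -l)       # invented level-l approximation
    if l == 0:
        return p_fine                          # coarsest level: plain samples
    p_coarse = np.exp(w) * (1 + 2.0 ** -(l - 1))
    return p_fine - p_coarse                   # small-variance correction term

# Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_(l-1)], with most
# samples spent on the cheap coarse levels and only a few on fine ones.
n_per_level = [10 ** (6 - l) for l in range(6)]
estimate = sum(sample_level(l, n).mean() for l, n in enumerate(n_per_level))
print("MLMC estimate:", estimate)
print("exact E[P_5]: ", np.exp(0.5) * (1 + 2.0 ** -5))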

INI 1  
10:00 to 11:00 
David Ginsbourger Variations on the Expected Improvement 
INI 1  
11:00 to 11:30  Morning Coffee  
11:30 to 12:30 
Claudia Schillings Analysis of Ensemble Kalman Inversion
The Ensemble Kalman filter (EnKF) has had enormous impact on the applied sciences since its introduction in the 1990s by Evensen and coworkers. It is used for both data assimilation problems, where the objective is to estimate a partially observed time-evolving system, and inverse problems, where the objective is to estimate a (typically distributed) parameter appearing in a differential equation.
In this talk we will focus on the identification of parameters through observations of the response of the system: the inverse problem. The EnKF can be adapted to this setting by introducing artificial dynamics. Despite documented success as a solver for such inverse problems, there is very little analysis of the algorithm. In this talk, we will discuss well-posedness and convergence results for the EnKF based on continuous-time scaling limits, which allow us to derive estimates on the long-time behavior of the EnKF and hence provide insights into the convergence properties of the algorithm. In particular, we are interested in the properties of the EnKF for a fixed ensemble size. Results from various numerical experiments supporting the theoretical findings will be presented.
This is joint work with Dirk Bloemker (U Augsburg), Mike Christie (Heriot-Watt University), Andrew M. Stuart (Caltech) and Philipp Wacker (FAU Erlangen-Nuernberg).
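One basic variant of ensemble Kalman inversion iterates the perturbed-observation EnKF update over artificial time. The sketch below uses an invented forward map, ensemble size and iteration count, and does not reproduce the continuous-time analysis of the talk.

import numpy as np

rng = np.random.default_rng(7)

def G(u):                                      # invented nonlinear forward map
    return np.array([u[0] + 0.1 * u[1] ** 3, u[1] + 0.1 * u[0] ** 3])

u_true = np.array([1.0, 0.5])
y = G(u_true) + rng.normal(0, 0.01, 2)         # noisy data
Gamma = 0.01 ** 2 * np.eye(2)                  # observation noise covariance

J = 100                                        # ensemble size
U = rng.normal(0, 1, (J, 2))                   # prior parameter ensemble
for _ in range(20):                            # artificial-time iterations
    Gu = np.array([G(u) for u in U])
    Cug = np.cov(U.T, Gu.T)[:2, 2:]            # parameter-output cross-covariance
    Cgg = np.cov(Gu.T)                         # output covariance
    K = Cug @ np.linalg.inv(Cgg + Gamma)       # Kalman gain
    Y = y + rng.multivariate_normal(np.zeros(2), Gamma, J)  # perturbed obs
    U = U + (Y - Gu) @ K.T                     # EnKF update of each member
print("ensemble mean:", U.mean(axis=0), " truth:", u_true)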

INI 1  
12:30 to 13:30  Lunch @ Churchill College  
13:30 to 14:30 
Richard Wilkinson (University of Sheffield) UQ perspectives on approximate Bayesian computation (ABC)
Approximate Bayesian computation (ABC) methods are widely used in some scientific disciplines for fitting stochastic simulators to data. They are primarily used in situations where the likelihood function of the simulator is unknown, but where it is possible to easily sample from the simulator. Methodological development of ABC methods has primarily focused on computational efficiency and tractability, rather than on careful uncertainty modelling. In this talk I'll briefly introduce ABC and its various extensions, and then interpret it from a UQ perspective and suggest how it may best be modified.
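A minimal rejection-ABC sketch: draw parameters from the prior, simulate, and keep draws whose summary statistic lands within epsilon of the observed one. The simulator, summary statistic and tolerance are invented for illustration.

import numpy as np

rng = np.random.default_rng(8)

def simulator(theta, n=50):                    # invented stochastic simulator
    return rng.normal(theta, 1.0, n)

y_obs = simulator(2.0)                         # pretend these are real data
s_obs = y_obs.mean()                           # summary statistic

eps, accepted = 0.1, []
for _ in range(20_000):
    theta = rng.uniform(-5, 5)                 # draw from the prior
    if abs(simulator(theta).mean() - s_obs) < eps:
        accepted.append(theta)                 # keep near-matching draws
print(len(accepted), "accepted; approx. posterior mean:", np.mean(accepted))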

INI 1  
14:30 to 15:30 
Catherine Powell; Peter Challenor Discussion session 
INI 1  
15:30 to 16:00  Afternoon Tea 