Seminars (UNQ)

Videos and presentation materials from other INI events are also available.


Event When Speaker Title Presentation Material
UNQW01 8th January 2018
10:00 to 11:00
Jim Berger Statistical perspectives on UQ, past and present
UNQW01 8th January 2018
11:30 to 12:30
Ralph Smith Uncertainty Quantification from a Mathematical Perspective
From both mathematical and statistical perspectives, the fundamental goal of Uncertainty Quantification (UQ) is to ascertain uncertainties inherent to parameters, initial and boundary conditions, experimental data, and models themselves to make predictions with improved and quantified accuracy.  Some factors that motivate recent developments in mathematical UQ analysis include the following.  The first is the goal of quantifying uncertainties for models and applications whose complexity precludes sole reliance on sampling-based methods.  This includes simulation codes for discretized partial differential equation (PDE) models, which can require hours to days to run.  Secondly, models are typically nonlinearly parameterized thus requiring nonlinear statistical analysis.  Finally, there is often emphasis on extrapolatory or out-of-data predictions; e.g., using time-dependent models to predict future events.  This requires embedding statistical models within physical laws, such as conservation relations, to provide the structure required for extrapolatory predictions.  Within this context, the discussion will focus on techniques to isolate subsets and subspaces of inputs that are uniquely determined by data.  We will also discuss the use of stochastic collocation and Gaussian process techniques to construct and verify surrogate models, which can be used for Bayesian inference and subsequent uncertainty propagation to construct prediction intervals for statistical quantities of interest.  The presentation will conclude with discussion pertaining to the quantification of model discrepancies in a manner that preserves physical structures.
UNQW01 8th January 2018
12:30 to 13:00
Poster Blitz I
UNQW01 8th January 2018
14:00 to 15:00
Bertrand Iooss Uncertainty quantification of numerical experiments: Several issues in industrial applications
In this talk, I will present some recent work at EDF R&D on the uncertainty management of CPU-expensive computer codes (also called numerical models). I will focus on the following topics: visualization tools in uncertainty and sensitivity analysis, functional risk curve estimation via a metamodel-based methodology, sensitivity analysis with dependent inputs using Shapley effects, and robustness analysis of quantile estimation by way of a recent technique based on perturbation of the inputs' probability distributions.
UNQW01 8th January 2018
15:00 to 16:00
Ahmed ElSheikh Machine learning techniques for uncertainty quantification of subsurface reservoir models
UNQW01 8th January 2018
16:30 to 17:30
Richard Bradley Philosophical Approaches to Uncertainty and its Measurement
In this talk I present recent work in philosophy and decision theory on the classification, measurement and management of uncertainty.
UNQW01 9th January 2018
09:00 to 10:00
Michael Goldstein Uncertainty analysis for complex systems modelled by computer simulators
This talk will present an overview of aspects of a general Bayesian methodology for performing detailed uncertainty analyses for complex physical systems which are modelled by computer simulators. Key features of this methodology are (i) simulator emulation, to allow us to explore the full range of outputs of the simulator (ii) history matching to identify all input choices consistent with historical data, and thus all future outcomes consistent with these choices (iii) structural discrepancy modelling, to make reliable uncertainty statements about the real world.
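For readers unfamiliar with history matching, the following minimal sketch (a toy illustration, not taken from the talk; the numbers and the stand-in emulator are assumptions) shows the standard implausibility calculation used to rule out input choices inconsistent with an observation.

```python
import numpy as np

# Toy history-matching step: rule out inputs x whose emulator prediction is
# implausibly far from the observed datum z, given all uncertainty sources.
z = 1.3                      # observed system value (placeholder)
sigma2_obs = 0.05**2         # observation error variance (placeholder)
sigma2_disc = 0.1**2         # structural (model) discrepancy variance (placeholder)

x = np.linspace(0, 1, 201)   # candidate simulator inputs
# Stand-ins for an emulator's posterior mean and variance at each x:
emu_mean = np.sin(3 * x) + x
emu_var = 0.02 + 0.05 * (x - 0.5) ** 2

# Implausibility measure I(x); inputs with I(x) > 3 are typically discarded.
implausibility = np.abs(z - emu_mean) / np.sqrt(emu_var + sigma2_disc + sigma2_obs)
not_ruled_out = x[implausibility <= 3.0]
print(f"{not_ruled_out.size} of {x.size} candidate inputs remain non-implausible")
```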
UNQW01 9th January 2018
10:00 to 11:00
Max Gunzburger Uncertainty quantification for partial differential equations: going beyond Monte Carlo
We consider the determination of statistical information about outputs of interest that depend on the solution of a partial differential equation having random inputs, e.g., coefficients, boundary data, source term, etc. Monte Carlo methods are the most widely used approach for this purpose. We discuss other approaches that, in some settings, incur far lower computational costs. These include quasi-Monte Carlo, polynomial chaos, stochastic collocation, compressed sensing, reduced-order modeling, and multi-level and multi-fidelity methods, for all of which we also discuss their relative strengths and weaknesses.
UNQW01 9th January 2018
11:30 to 12:30
Henry Wynn Experimental design in computer experiments: review and recent research
Computer experiments have led to a growth in the development of certain types of experimental design which fill out the input space of a simulator in a comprehensive way: Latin Hypercube Sampling, Sobol sequences and many others. They differ from more traditional factorial experimental designs which have been used typically to fit polynomial response surfaces. Despite this structural difference, the principles of good and indeed optimal design still apply, as do the tensions between general purpose designs and designs tuned to particular models and utility functions.

The talk will be split between the fundamental principles of experimental design as applied to computer experiments and a review of notable methods from the research community. Some attention will be given to designs based on information theoretic principles and the connection to more general theories of learning, where Bayesian principles are particularly useful. 
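As a concrete example of the space-filling designs mentioned above, here is a minimal Latin Hypercube Sampling routine (an illustrative NumPy sketch; the function name and sizes are arbitrary choices, not material from the talk).

```python
import numpy as np

def latin_hypercube(n_points, n_dims, rng=None):
    """Latin hypercube sample on [0, 1]^d: one point per stratum per dimension."""
    rng = np.random.default_rng(rng)
    # For each dimension, place one point in each of n equal-width strata at a
    # uniformly random location within the stratum, then shuffle the strata.
    strata = rng.permuted(np.tile(np.arange(n_points), (n_dims, 1)), axis=1).T
    return (strata + rng.random((n_points, n_dims))) / n_points

design = latin_hypercube(10, 3, rng=42)
print(design)
```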
UNQW01 9th January 2018
13:30 to 14:30
Lindsay Lee UQ in earth sciences: applications and challenges
Co-authors: Ken Carslaw (University of Leeds), Carly Reddington (University of Leeds), Kirsty Pringle (University of Leeds), Graham Mann (University of Leeds), Oliver Wild (University of Lancaster), Edmund Ryan (University of Lancaster), Philip Stier (University of Oxford), Duncan Watson-Parris (University of Oxford)

I will introduce some of the applications of UQ in earth sciences and the challenges remaining that could be addressed during the programme. Earth science models are 3-d dynamic models whose CPU demands and data storage often limit the sample size for UQ. We often choose to use averages of the data and dimension reduction to carry out UQ but it is not always clear that the uncertainty quantified is the most useful for uncertainty reduction or increasing confidence in prediction. I will ask whether we should be applying the same techniques to understand and improve the model as those used to reduce uncertainty in predictions, showing some examples where the end goal is different. I will look at UQ when constraint or calibration is the goal and how we incorporate uncertainty and use ‘real’ data. This will also raise the question of identifiability in our uncertainty quantification and how to deal with and accurately quantify irreducible uncertainty. Finally, I would like to discuss how we validate our methods in a meaningful way.
UNQW01 9th January 2018
14:30 to 15:30
Peter Jan van Leeuwen Sequential Bayesian Inference in high-dimensional geoscience applications
Applications of sequential Bayesian Inference in the geosciences, such as atmosphere, ocean, atmospheric chemistry, and land-surface, are characterised by high dimensions, nonlinearity, and complex relations between system variables. While Gaussian-based approximations such as Ensemble Kalman Filters and Smoothers and global variational methods have been used quite extensively in this field, numerous problems ask for methods that can handle strong nonlinearities. In this talk I will discuss recent progress using particle filters.

Three main areas of active research in particle filtering can be distinguished: exploring localisation, exploring proposal densities, and exploring (optimal) transportation (and mergers of these ideas are on the horizon). In localisation the idea is to split the high-dimensional problem into several smaller problems that then need to be stitched together in a smart way. The first approximate applications of this methodology have just made it to weather prediction, showing the exponentially fast developments here. However, the ‘stitching’ problem remains outstanding. The proposal density methodology discussed next might be fruitful to incorporate here.

In the proposal density approach one tries to evolve states in state space such that they obtain very similar weights in the particle filter. Challenges are, of course, the huge dimensions, but these also provide opportunities via the existence of typical sets, which lead to preferred parts of state space for the particles. Recent attempts to exploit typical sets will be discussed.

Finally, we will discuss recent progress in (optimal) transportation. The idea here is that a set of prior particles has to be transformed into a set of posterior particles. This is an old problem in optimal transportation. However, the optimality condition poses unnecessary constraints, and by relaxing the optimality constraint we are able to formulate new efficient methods. Specifically, by iteratively minimising the relative entropy between the probability density of the prior particles and the posterior, a sequence of transformations emerges for each particle that seems to be tractable even for very high dimensional spaces. A new idea is to explore localisation to obtain a more accurate description of the target posterior, but without the stitching issues mentioned above.

So far, model reduction techniques, emulation, and machine learning techniques have been unsuccessful for these high-dimensional state estimation problems, but I’m keen to further understand the possibilities and limitations.
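To fix ideas, the sketch below implements a basic bootstrap particle filter (propagate, weight, resample) for a one-dimensional toy state-space model. It only illustrates the weighting and resampling mechanics; the talk concerns far higher-dimensional geoscience systems and more sophisticated proposal densities.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D state-space model: x_t = 0.9 x_{t-1} + noise, y_t = x_t + noise.
T, N = 50, 500                       # time steps, particles
true_x, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    true_x[t] = 0.9 * true_x[t - 1] + rng.normal(scale=0.5)
    y[t] = true_x[t] + rng.normal(scale=0.3)

particles = rng.normal(size=N)       # initial ensemble
est = np.zeros(T)
for t in range(1, T):
    # Propagate with the model (the "proposal" of the bootstrap filter) ...
    particles = 0.9 * particles + rng.normal(scale=0.5, size=N)
    # ... weight by the likelihood of the observation ...
    logw = -0.5 * ((y[t] - particles) / 0.3) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = np.sum(w * particles)
    # ... and resample to avoid weight degeneracy.
    particles = rng.choice(particles, size=N, p=w)

print("posterior-mean RMSE:", np.sqrt(np.mean((est - true_x) ** 2)))
```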
UNQW01 9th January 2018
16:00 to 17:00
Jakub Bijak Uncertainty quantification in demography: Challenges and possible solutions
Demography, with its over 350 years of history, is renowned for its empiricism, firm links with statistics, and the lack of a strong theoretical background. The uncertainty of demographic processes is well acknowledged, and over the past three decades, methods have been designed for quantifying the uncertainty in population estimates and forecasts. In parallel, developments in model-based demographic simulation methods, such as agent-based modelling, have recently offered a promise of shedding some light on the complexity of population processes. However, the existing approaches are still far from fulfilling this promise, and are themselves fraught with epistemological pitfalls. Crucially, complex problems are not analytically tractable with the use of traditional methods. In this talk, I will discuss the potential of uncertainty quantification in bringing together the empirical data, statistical inference and computer simulations, with insights into behavioural and social theory and knowledge of social mechanisms. The discussion will be illustrated by an example of a new research programme on model-based migration studies.
UNQW01 10th January 2018
09:00 to 10:00
Julia Charrier A few elements of numerical analysis for PDEs with random coefficients of lognormal type
In this talk, we will address some basic issues appearing in the theoretical analysis of numerical methods for PDEs with random coefficients of lognormal type. To begin with, such problems will be motivated by applications to the study of subsurface flow with uncertainty. We will then give some results concerning the spatial regularity of solutions of such problems, which of course impacts the error committed in spatial discretization. We will complete these results with integrability properties to deal with unboundedness of these solutions and then give error bounds for the finite element approximations in adequate norms. Finally we will discuss the question of the dimensionality, which is crucial for numerical methods such as stochastic collocation. We will consider the approximation of the random coefficient through a Karhunen-Loève expansion, and provide bounds of the resulting error on the solution by highlighting the interest of the notion of weak error.
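As a concrete illustration of the Karhunen-Loève truncation mentioned at the end of the abstract, here is a minimal sketch that samples a lognormal coefficient on a 1-D grid from a truncated KL expansion (the exponential covariance, correlation length and truncation level are arbitrary assumptions, not taken from the talk).

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 10                                # grid points, retained KL modes
x = np.linspace(0, 1, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.3)   # covariance of the Gaussian field g
eigval, eigvec = np.linalg.eigh(C / n)        # discrete approximation of the KL eigenpairs
idx = np.argsort(eigval)[::-1][:m]            # keep the m largest modes
lam, phi = eigval[idx], eigvec[:, idx]

xi = rng.standard_normal(m)                   # independent N(0,1) coordinates
g = phi @ (np.sqrt(lam) * xi) * np.sqrt(n)    # truncated KL realisation of g
a = np.exp(g)                                 # lognormal diffusion coefficient a = exp(g)
print("range of a(x):", a.min(), a.max())
```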
UNQW01 10th January 2018
10:00 to 11:00
Chris Oates Bayesian Probabilistic Numerical Methods
In this talk, numerical computation, such as the numerical solution of a PDE, will be treated as an inverse problem in its own right. The popular Bayesian approach to inversion is considered, wherein a posterior distribution is induced over the object of interest by conditioning a prior distribution on the same finite information that would be used in the classical numerical context. This approach to numerical computation opens the door to application of statistical techniques, and we discuss the relevance of decision theory and probabilistic graphical models in particular detail. The concepts will be briefly illustrated through numerical experiment.
UNQW01 10th January 2018
11:30 to 12:30
Howard Elman Linear Algebra Methods for Parameter-Dependent Partial Differential Equations
We discuss some recent developments in solution algorithms for the linear algebra problems that arise from parameter-dependent partial differential equations (PDEs). In this setting, there is a need to solve large coupled algebraic systems (which come from stochastic Galerkin methods), or large numbers of standard spatially discrete systems (from Monte Carlo or stochastic collocation methods). The ultimate goal is solutions that represent surrogate approximations that can be evaluated cheaply for multiple values of the parameters, which can be used effectively for simulation or uncertainty quantification.

Our focus is on representing parameterized solutions in reduced-basis or low-rank matrix formats. We show that efficient solution algorithms can be built from multigrid methods designed for the underlying discrete PDE, in combination with methods for truncating the ranks of iterates, which reduce both cost and storage requirements of solution algorithms. These ideas can be applied to the systems arising from many ways of treating the parameter spaces, including stochastic Galerkin and collocation. In addition, we present new approaches for solving the dense systems that arise from reduced-order models by preconditioned iterative methods and we show that such approaches can also be combined with empirical interpolation methods to solve the algebraic systems that arise from nonlinear PDEs. 
UNQW01 11th January 2018
09:00 to 10:00
Markus Bachmayr Low-rank approximations for parametric and random PDEs
The first part of this talk gives an introduction to low-rank tensors as a general tool for approximating high-dimensional functions. The second part deals with their application to partial differential equations with many parameters, as they arise in particular in uncertainty quantification problems. Here the focus is on error and complexity guarantees, as well as on low-rank approximability of solutions.
UNQW01 11th January 2018
10:00 to 11:00
Tony O'Hagan Gaussian process emulation
Gaussian process (GP) emulators are a widely used tool in uncertainty quantification (UQ). This talk will define an emulator as a particular kind of surrogate model, then explain why GPs provide a tractable and flexible basis for constructing emulators, and set out some basic GP theory. There are a number of choices to be made in building a GP emulator, and these are discussed before going on to describe other uses of GPs in UQ and elsewhere.
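A minimal sketch of the basic construction follows: a zero-mean GP conditioned on a handful of runs of a toy "simulator". The kernel, lengthscale and design are illustrative assumptions rather than the choices discussed in the talk.

```python
import numpy as np

def kernel(A, B, lengthscale=0.2, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = A[:, None] - B[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

simulator = lambda x: np.sin(6 * x) + x          # stand-in for an expensive code
X_train = np.linspace(0, 1, 8)                   # design points (simulator runs)
y_train = simulator(X_train)

X_test = np.linspace(0, 1, 200)
K = kernel(X_train, X_train) + 1e-8 * np.eye(8)  # jitter for numerical stability
Ks = kernel(X_test, X_train)

post_mean = Ks @ np.linalg.solve(K, y_train)     # emulator prediction
post_var = kernel(X_test, X_test).diagonal() - np.einsum(
    "ij,ij->i", Ks, np.linalg.solve(K, Ks.T).T)  # predictive variance
print("max predictive std dev:", np.sqrt(post_var.max()))
```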
UNQW01 11th January 2018
11:30 to 12:30
Masoumeh Dashti The Bayesian approach to inverse problems
UNQW01 11th January 2018
14:30 to 15:30
Ronni Bowman Uncertainty Quantification (UQ) and Communication for Chemical and Biological Hazard Response
Effectively calculating and communicating uncertainty is crucial for effective risk management, but is difficult to achieve in practice.  This is compounded when the application area is highly complex with multiple model fidelities, and “accurate” UQ is impossible. 

Uncertainty communication must be clear to experts and non-experts alike and must account for a lack of understanding of the definitions of both "risk" and "uncertainty".  By drawing on examples from the wide variety of defence applications that require an understanding and communication of uncertainty and outlining the reason that uncertainty calculation and communication is crucial to decision making, this talk will explore the current state of the art and outline the many open challenges remaining.


The talk will then focus on a particular challenge area and work through the complex information chain with associated timelines to provide an insight into the response times required to support real world scenarios. 

Content includes material subject to © Crown copyright (2017), Dstl. This material is licensed under the terms of the Open Government Licence except where otherwise stated. To view this licence, visit
http://www.nationalarchives.gov.uk/doc/open-government-licence/version/3 or write to the Information Policy Team, The National Archives, Kew, London TW9 4DU, or email: psi[at]nationalarchives.gsi.gov[dot]uk
UNQW01 11th January 2018
16:00 to 17:00
Richard Clayton Computational models of the heart: Why they are useful, and how they would benefit from UQ
Normal and regular beating of the human heart is essential to maintain life. In each beat, a wave of electrical excitation arises in the heart's natural pacemaker, and spreads throughout the rest of the heart. This wave acts as a signal to initialise and synchronise mechanical contraction of the heart tissue, which in turn generates pressure in the chambers of the heart and acts to propel blood around the body. Models have been developed for the electrical and mechanical behaviour of the heart, as well as for blood flow. In this talk I will concentrate on models of electrical activation because failures in the initiation and normal propagation of electrical activation can result in a disruption of normal mechanical behaviour, and underlie a range of common heart problems. Models of electrical activation in both single cells and in tissue are stiff, nonlinear, and have a large number of parameters. Until recently there has been little interest in how uncertainty in model parameters and other inputs influences model behaviour. However, the prospect of using these models for critical applications including drug safety testing and guiding interventions in patients has begun to stimulate activity in this area.
UNQW01 12th January 2018
09:00 to 10:00
Robert Scheichl Multilevel Monte Carlo Methods
Multilevel Monte Carlo (MLMC) is a variance reduction technique for stochastic simulation and Bayesian inference which greatly reduces the computational cost of standard Monte Carlo approaches by employing cheap, coarse-scale models with lower fidelity to carry out the bulk of the stochastic simulations, while maintaining the overall accuracy of the fine scale model through a small number of well-chosen high fidelity simulations. In this talk, I will first review the ideas behind the approach and discuss a number of applications and extensions that illustrate the generality of the approach. The multilevel Monte Carlo method (in its practical form) was originally introduced and popularised about 10 years ago by Mike Giles for stochastic differential equations in mathematical finance and has attracted a lot of interest in the context of uncertainty quantification of physical systems modelled by partial differential equations (PDEs). The underlying idea had actually been discovered 10 years earlier in 1998, in an information-theoretical paper by Stefan Heinrich, but had remained largely unknown until 2008. In recent years, there has been an explosion of activity and its application has been extended, among others, to biological/chemical reaction networks, plasma physics, interacting particle systems as well as to nested simulations. More importantly for this community, the approach has also been extended to Markov chain Monte Carlo, sequential Monte Carlo and other filtering techniques. In the second part of the talk, I will describe in more detail how the MLMC framework can provide a computationally tractable methodology for Bayesian inference in high-dimensional models constrained by PDEs and demonstrate the potential on a toy problem in the context of Metropolis-Hastings MCMC. Finally, I will finish the talk with some perspectives beyond the classical MLMC framework, in particular using sample-dependent model hierarchies and a posteriori error estimators and extending the classical discrete, level-based approach to a new Continuous Level Monte Carlo method.
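The telescoping-sum idea behind MLMC can be shown in a few lines. The sketch below estimates E[P] for a toy quantity of interest whose level-l approximation has an O(2^-l) bias (an analytic stand-in for a discretised PDE output; the sample allocations are arbitrary assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

def P(level, xi):
    """Level-l approximation of the quantity of interest for input sample xi."""
    h = 2.0 ** (-level)                  # "mesh size" at this level
    return np.exp(xi) + h * np.cos(xi)   # bias decays as O(h)

L = 5
samples_per_level = [4000 // 2**l + 10 for l in range(L + 1)]  # fewer fine samples

estimate = 0.0
for l, N_l in enumerate(samples_per_level):
    xi = rng.standard_normal(N_l)
    if l == 0:
        Y = P(0, xi)                   # coarsest level: plain Monte Carlo
    else:
        Y = P(l, xi) - P(l - 1, xi)    # correction term: small variance, cheap to estimate
    estimate += Y.mean()

print("MLMC estimate of E[P]:", estimate, " (exact E[exp(xi)] ~", np.exp(0.5), ")")
```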
UNQW01 12th January 2018
10:00 to 11:00
David Ginsbourger Variations on the Expected Improvement
UNQW01 12th January 2018
11:30 to 12:30
Claudia Schillings Analysis of Ensemble Kalman Inversion
The Ensemble Kalman filter (EnKF) has had enormous impact on the applied sciences since its introduction in the 1990s by Evensen and coworkers. It is used for both data assimilation problems, where the objective is to estimate a partially observed time-evolving system, and inverse problems, where the objective is to estimate a (typically distributed) parameter appearing in a differential equation. In this talk we will focus on the identification of parameters through observations of the response of the system - the inverse problem. The EnKF can be adapted to this setting by introducing artificial dynamics. Despite documented success as a solver for such inverse problems, there is very little analysis of the algorithm. In this talk, we will discuss well-posedness and convergence results of the EnKF based on the continuous time scaling limits, which allow us to derive estimates on the long-time behavior of the EnKF and, hence, provide insights into the convergence properties of the algorithm. In particular, we are interested in the properties of the EnKF for a fixed ensemble size. Results from various numerical experiments supporting the theoretical findings will be presented. This is joint work with Dirk Bloemker (U Augsburg), Mike Christie (Heriot-Watt University), Andrew M. Stuart (Caltech) and Philipp Wacker (FAU Erlangen-Nuernberg).
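For orientation, here is a minimal ensemble Kalman inversion sketch for a toy linear inverse problem, using an EnKF-style update with artificial dynamics and perturbed observations. The forward map, noise level and ensemble size are illustrative assumptions; this is not the particular analysis presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

G = np.array([[1.0, 2.0], [3.0, 1.0], [0.5, 0.5]])   # forward operator
u_true = np.array([1.0, -0.5])
gamma = 0.05                                          # observation noise std dev
y = G @ u_true + gamma * rng.standard_normal(3)

J = 100                                               # ensemble size
u = rng.standard_normal((J, 2))                       # initial ensemble (prior draws)
for _ in range(20):                                   # EKI iterations
    g = u @ G.T                                       # forward map of each member
    Cug = np.cov(u.T, g.T)[:2, 2:]                    # cross-covariance of u and G(u)
    Cgg = np.cov(g.T)                                 # covariance of G(u)
    K = Cug @ np.linalg.inv(Cgg + gamma**2 * np.eye(3))
    y_pert = y + gamma * rng.standard_normal((J, 3))  # perturbed observations
    u = u + (y_pert - g) @ K.T                        # Kalman-type update

print("ensemble mean:", u.mean(axis=0), " true:", u_true)
```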
UNQW01 12th January 2018
13:30 to 14:30
Richard Wilkinson UQ perspectives on approximate Bayesian computation (ABC)
Approximate Bayesian computation (ABC) methods are widely used in some scientific disciplines for fitting stochastic simulators to data. They are primarily used in situations where the likelihood function of the simulator is unknown, but where it is possible to easily sample from the simulator. Methodological development of ABC methods has primarily focused on computational efficiency and tractability, rather than on careful uncertainty modelling. In this talk I'll briefly introduce ABC and its various extensions, and then interpret it from a UQ perspective and suggest how it may best be modified.
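For readers new to ABC, a minimal rejection sampler illustrates the basic mechanism: draw from the prior, simulate, and accept if the simulated summary is within a tolerance of the observed one. The Gaussian toy model and tolerance are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

data = rng.normal(loc=2.0, scale=1.0, size=50)        # "observed" data
s_obs = data.mean()                                   # summary statistic

def simulator(theta, size=50):
    """Cheap stochastic simulator with unknown (to ABC) likelihood."""
    return rng.normal(loc=theta, scale=1.0, size=size)

eps = 0.1                                             # ABC tolerance
accepted = []
for _ in range(20000):
    theta = rng.uniform(-5, 5)                        # draw from the prior
    s_sim = simulator(theta).mean()
    if abs(s_sim - s_obs) <= eps:                     # accept if summaries match
        accepted.append(theta)

accepted = np.array(accepted)
print(f"{accepted.size} accepted; approximate posterior mean ~ {accepted.mean():.2f}")
```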
UNQ 17th January 2018
11:00 to 13:00
Hermann Matthies Probability paradigms in Uncertainty Quantification
Probability theory was axiomatically built on the concept of measure by A. Kolmogorov in the early 1930s, giving the probability measure and the related integral as primary objects and random variables, i.e. measurable functions, as secondary. Not long after Kolmogorov's work, developments in operator algebras connected to quantum theory in the early 1940s led to similar results in an approach where algebras of random variables and the expectation functional are the primary objects. Historically this picks up the view implicitly contained in the early probabilistic theory of the Bernoullis. This algebraic approach allows extensions to more complicated concepts like non-commuting random variables and infinite dimensional function spaces, as it occurs e.g. in quantum field theory, random matrices, and tensor-valued random fields. It not only fully recovers the measure-theoretic approach, but can extend it considerably. For much practical and numerical work, which is often primarily concerned with random variables, expectations, and conditioning, it offers an independent theoretical underpinning. In short words, it is “probability without measure theory”. This functional analytic setting has strong connections to the spectral theory of linear operators, where analogies to integration are apparent if they are looked for. These links extend to the concept of weak distribution in a twofold way, which describes probability on infinite dimensional vector spaces. Here the random elements are represented by linear mappings, and factorisations of linear maps are intimately connected with representations and tensor products, as they appear in numerical approximations. This conceptual basis of vector spaces, algebras, linear functionals, and operators gives a fresh view on the concepts of expectation and conditioning, as it occurs in applications of Bayes' theorem. The problem of Bayesian updating will be sketched in the context of algebras via projections and mappings.



OFBW37 1st February 2018
10:00 to 10:10
Christie Marr, Jane Leeks Welcome and Introduction
OFBW37 1st February 2018
10:10 to 10:50
Peter Challenor Uncertainty Quantification in Complex Models - A Statistical Perspective
OFBW37 1st February 2018
10:50 to 11:30
Catherine Powell An Overview of Stochastic Finite Element Methods for PDE Models with Uncertain Inputs
OFBW37 1st February 2018
11:50 to 12:15
Alexander Karl Uncertainty Management During the Design of Advanced Aero Engines
OFBW37 1st February 2018
12:15 to 12:40
Sanjiv Sharma Challenges of Modelling and Parameter Uncertainties in Aeronautical Applications
OFBW37 1st February 2018
12:40 to 13:00
Questions and Discussion
OFBW37 1st February 2018
14:00 to 14:25
Stephen Jewson Understanding and Mitigating the Impacts of Natural Catastrophes: Models, Uncertainties and Applications
OFBW37 1st February 2018
14:25 to 14:50
Mikhail Derevyanko Modelling Financial Uncertainty for Regulated Insurance Companies: the Case of Equity Risk
OFBW37 1st February 2018
14:50 to 15:10
Questions and Discussion
OFBW37 1st February 2018
15:30 to 15:55
Julian Gunn Uncertainties and Modelling in the Management of Coronary Heart Disease
OFBW37 1st February 2018
15:55 to 16:30
Questions, Discussion and Wrap-Up
UNQW02 5th February 2018
11:30 to 12:30
Catherine Powell Adaptive Stochastic Galerkin Finite Element Approximation for Elliptic PDEs with Random Coefficients
Co-author: Adam Crowder (University of Manchester)

We consider a standard elliptic PDE model with uncertain coefficients. Such models are simple, but are well understood theoretically and so serve as a canonical class of problems on which to compare different numerical schemes (computer models).

Approximations which take the form of polynomial chaos (PC) expansions have been widely used in applied mathematics and can be used as surrogate models in UQ studies. When the coefficients of the approximation are computed using a Galerkin method, we use the term ‘Stochastic Galerkin approximation’. In statistics, the term ‘intrusive PC approximation’ is also often used. In the Galerkin approach, the resulting PC approximation is optimal in that the energy norm of the error between the true model solution and the PC approximation is minimised. This talk will focus on how to build the approximation space (in a computer code) in a computationally efficient way while also guaranteeing accuracy.

In the stochastic Galerkin finite element (SGFEM) approach, an approximation is sought in a space which is defined through a chosen set of spatial finite element basis functions and a set of orthogonal polynomials in the parameters that define the uncertain PDE coefficients. When the number of parameters is too high, the dimension of this space becomes unmanageable. One remedy is to use ‘adaptivity’. First, we generate an approximation in a low-dimensional approximation space (which is cheap) and then use a computable a posteriori error estimator to decide whether the current approximation is accurate enough or not. If not, we enrich the approximation space, estimate the error again, and so on, until the final approximation is accurate enough. This allows us to design problem-specific polynomial approximations. We describe an error estimation procedure, outline the computational costs, and illustrate its use through numerical results. An improved multilevel implementation will be outlined in a poster given by Adam Crowder.
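The solve-estimate-enrich loop described above can be illustrated on a much simpler problem. The sketch below adaptively increases the polynomial degree of a Legendre expansion of a toy 1-D parametric "solution", using the size of the newest coefficient as a crude computable error indicator; it conveys the flavour of adaptivity only and is not the SGFEM error estimator of the talk.

```python
import numpy as np
from numpy.polynomial import legendre

# Toy parametric problem: u(y) plays the role of the solution's dependence on
# a single uncertain parameter y in [-1, 1].
u = lambda y: 1.0 / (1.0 + 0.9 * y)
y_quad, w_quad = legendre.leggauss(64)          # quadrature for the projections

degree, tol = 1, 1e-6
while True:
    # "Solve": project u onto Legendre polynomials up to the current degree.
    coeffs = np.array([
        (2 * k + 1) / 2 * np.sum(w_quad * u(y_quad) * legendre.Legendre.basis(k)(y_quad))
        for k in range(degree + 1)
    ])
    # "Estimate": use the newest coefficient as a computable error indicator.
    if abs(coeffs[-1]) < tol or degree > 50:
        break
    degree += 1                                 # "Enrich" the approximation space

print("final polynomial degree:", degree, " last coefficient:", coeffs[-1])
```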
UNQW02 5th February 2018
13:30 to 14:30
Michael Goldstein Emulation for model discrepancy
Careful assessment of model discrepancy is a crucial aspect of uncertainty quantification. We will discuss the different ways in which emulation may be used to support such assessment, illustrating with practical examples.
UNQW02 5th February 2018
14:30 to 15:30
Ralph Smith Active Subspace Techniques to Construct Surrogate Models for Complex Physical and Biological Models
For many complex physical and biological models, the computational cost of high-fidelity simulation codes precludes their direct use for Bayesian model calibration and uncertainty propagation. For example, the considered neutronics and nuclear thermal hydraulics codes can take hours to days for a single run. Furthermore, the models often have tens to thousands of inputs--comprised of parameters, initial conditions, or boundary conditions--many of which are unidentifiable in the sense that they cannot be uniquely determined using measured responses. In this presentation, we will discuss techniques to isolate influential inputs for subsequent surrogate model construction for Bayesian inference and uncertainty propagation. For input selection, we will discuss advantages and shortcomings of global sensitivity analysis to isolate influential inputs and the use of active subspace construction to determine low-dimensional input manifolds. We will also discuss the manner in which Bayesian calibration on active subspaces can be used to quantify uncertainties in physical parameters. These techniques will be illustrated for models arising in nuclear power plant design, quantum-informed material characterization, and HIV modeling and treatment.
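A minimal active-subspace sketch follows: estimate the dominant input directions of a scalar model from sampled gradients via an eigendecomposition of the gradient covariance matrix. The toy model and sampling distribution are assumptions; in practice gradients would come from adjoints or finite differences of the simulation code.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10                                            # number of model inputs
# Toy model f(x) = sin(a . x): it varies only along the single direction a,
# so its active subspace is one-dimensional.
a = rng.standard_normal(d)
grad = lambda x: np.cos(x @ a) * a                # gradient of f at x

X = rng.standard_normal((500, d))                 # samples from the input distribution
G = np.array([grad(x) for x in X])
C = G.T @ G / len(X)                              # Monte Carlo estimate of E[grad f grad f^T]
eigval, eigvec = np.linalg.eigh(C)
eigval, eigvec = eigval[::-1], eigvec[:, ::-1]    # sort descending

print("leading eigenvalues:", np.round(eigval[:4], 4))   # sharp drop after the first
active_dir = eigvec[:, 0]                                # the 1-D active subspace
print("alignment with a:", abs(active_dir @ a) / np.linalg.norm(a))
```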
UNQW02 5th February 2018
16:00 to 17:00
Christoph Schwab Domain Uncertainty Quantification
We address the numerical analysis of domain uncertainty in UQ for partial differential and integral equations. For small amplitude shape variation, a first order, kth moment perturbation analysis and sparse tensor discretization produces approximate k-point correlations at near optimal order: work and memory scale log-linearly with respect to N, the number of degrees of freedom for approximating one instance of the nominal (mean-field) problem [1,3]. For large domain variations, the notion of shape holomorphy of the solution is introduced. It implies (the 'usual') sparsity and dimension-independent convergence rates of gpc approximations (e.g., anisotropic stochastic collocation, least squares, CS, ...) of parametric domain-to-solution maps in forward UQ. This property holds for a broad class of smooth elliptic and parabolic boundary value problems. Shape holomorphy also implies sparsity of gpc expansions of certain posteriors in Bayesian inverse UQ [7], [->WS4]. We discuss consequences of gpc sparsity on some surrogate forward models, to be used e.g. in optimization under domain uncertainty [8,9]. We also report on dimension independent convergence rates of Smolyak and higher order Quasi-Monte Carlo integration [5,6,7]. Examples include the usual (anisotropic) diffusion problems, Navier-Stokes [2] and time harmonic Maxwell PDEs [4], and forward UQ for fractional PDEs. Joint work with Jakob Zech (ETH), Albert Cohen (Univ. P. et M. Curie), Carlos Jerez-Hanckes (PUC, Santiago, Chile). Work supported in part by the Swiss National Science Foundation.

References:
[1] A. Chernov and Ch. Schwab: First order k-th moment finite element analysis of nonlinear operator equations with stochastic data, Mathematics of Computation, 82 (2013), pp. 1859-1888.
[2] A. Cohen and Ch. Schwab and J. Zech: Shape Holomorphy of the stationary Navier-Stokes Equations, accepted (2018), SIAM J. Math. Analysis, SAM Report 2016-45.
[3] H. Harbrecht and R. Schneider and Ch. Schwab: Sparse Second Moment Analysis for Elliptic Problems in Stochastic Domains, Numerische Mathematik, 109/3 (2008), pp. 385-414.
[4] C. Jerez-Hanckes and Ch. Schwab and J. Zech: Electromagnetic Wave Scattering by Random Surfaces: Shape Holomorphy, Math. Mod. Meth. Appl. Sci., 27/12 (2017), pp. 2229-2259.
[5] J. Dick and Q. T. Le Gia and Ch. Schwab: Higher order Quasi Monte Carlo integration for holomorphic, parametric operator equations. SIAM Journ. Uncertainty Quantification, 4/1 (2016), pp. 48-79.
[6] J. Zech and Ch. Schwab: Convergence rates of high dimensional Smolyak quadrature. In review, SAM Report 2017-27.
[7] J. Dick and R. N. Gantner and Q. T. Le Gia and Ch. Schwab: Multilevel higher-order quasi-Monte Carlo Bayesian estimation. Math. Mod. Meth. Appl. Sci., 27/5 (2017), pp. 953-995.
[8] P. Chen and Ch. Schwab: Sparse-grid, reduced-basis Bayesian inversion: Nonaffine-parametric nonlinear equations. Journal of Computational Physics, 316 (2016), pp. 470-503.
[9] Ch. Schwab and J. Zech: Deep Learning in High Dimension. In review, SAM Report 2017-57.
UNQW02 6th February 2018
09:00 to 10:00
Hoang Tran Recovery conditions of compressed sensing approach to uncertainty quantification
Co-author: Clayton Webster (UTK/ORNL)

This talk is concerned with the compressed sensing approach to reconstruction of high-dimensional functions from a limited amount of data. In this approach, the uniform bounds of the underlying global polynomial bases have often been relied on for the complexity analysis and algorithm development. We prove a new, improved recovery condition without using this uniform boundedness assumption, applicable to multidimensional Legendre approximations. Specifically, our sample complexity is established using the unbounded envelope of all polynomials, thus independent of polynomial subspaces. Some consequent, simple criteria for choosing good random sample sets will also be discussed. In the second part, I will discuss the recovery guarantees of nonconvex optimizations. These minimizations are generally closer to the l_0 penalty than the l_1 norm, thus it is widely accepted (also demonstrated computationally in UQ) that they are able to enhance the sparsity and accuracy of the approximations. However, the theory proving that nonconvex penalties are as good as or better than l_1 minimization in sparse reconstruction has not been available beyond a few specific cases. We aim to fill this gap by establishing new recovery guarantees through unified null space properties that encompass most of the currently proposed nonconvex functionals in the literature, verifying that they are truly superior to l_1.
UNQW02 6th February 2018
10:00 to 11:00
Maurizio Filippone Random Feature Expansions for Deep Gaussian Processes
Drawing meaningful conclusions on the way complex real life phenomena work and being able to predict the behavior of systems of interest require developing accurate and highly interpretable mathematical models whose parameters need to be estimated from observations. In modern applications, however, we are often challenged with the lack of such models, and even when these are available they are too computationally demanding to be suitable for standard parameter optimization/inference methods. While probabilistic models based on Deep Gaussian Processes (DGPs) offer attractive tools to tackle these challenges in a principled way and to allow for a sound quantification of uncertainty, carrying out inference for these models poses huge computational challenges that arguably hinder their wide adoption. In this talk, I will present our contribution to the development of practical and scalable inference for DGPs, which can exploit distributed and GPU computing. In particular, I will introduce a formulation of DGPs based on random features that we infer using stochastic variational inference. Through a series of experiments, I will illustrate how our proposal enables scalable deep probabilistic nonparametric modeling and significantly advances the state-of-the-art on inference methods for DGPs.
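To give a flavour of the random feature idea, the sketch below approximates a single RBF-kernel GP layer by a finite random Fourier feature expansion and fits it by Bayesian linear regression in feature space. The lengthscale, feature count and data are illustrative assumptions, and the talk's stochastic variational treatment of deep (multi-layer) models is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
lengthscale, n_features = 0.3, 200

def rff(X, Omega, b):
    """Features phi(x) with phi(x) . phi(x') approximating an RBF kernel."""
    return np.sqrt(2.0 / n_features) * np.cos(X[:, None] * Omega + b)

Omega = rng.normal(scale=1.0 / lengthscale, size=n_features)   # spectral frequencies
b = rng.uniform(0, 2 * np.pi, size=n_features)                 # random phases

X = np.linspace(-1, 1, 40)
y = np.sin(4 * X) + 0.05 * rng.standard_normal(40)

Phi = rff(X, Omega, b)                                          # (40, n_features)
# Bayesian linear regression in feature space (unit Gaussian prior on weights,
# noise standard deviation 0.05): posterior mean of the weights.
A = Phi.T @ Phi + 0.05**2 * np.eye(n_features)
w = np.linalg.solve(A, Phi.T @ y)
print("training RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
```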
UNQW02 6th February 2018
11:30 to 12:30
Lorenzo Tamellini Multi-Index Stochastic Collocation (MISC) for Elliptic PDEs with random data
Co-authors: Joakim Beck (KAUST), Abdul-Lateef Haji-Ali (Oxford University), Fabio Nobile (EPFL), Raul Tempone (KAUST)

In this talk we describe the Multi-Index Stochastic Collocation method (MISC) for computing statistics of the solution of an elliptic PDE with random data. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. We propose an optimization procedure to select the most effective mixed differences to include in the MISC estimator: such optimization is a crucial step and allows us to build a method that, provided with sufficient solution regularity, is potentially more effective than other multi-level collocation methods already available in literature. We provide a complexity analysis both in the case of a finite and an infinite number of random variables, showing that in the optimal case the convergence rate of MISC is only dictated by the convergence of the deterministic solver applied to a one dimensional problem. We show the effectiveness of MISC with some computational tests, and in particular we discuss how MISC can be efficiently combined with an isogeometric solver for PDE.
UNQW02 6th February 2018
13:30 to 14:30
John Paul Gosling Modelling discontinuities in simulator output using Voronoi tessellations
Co-authors: Chris Pope (University of Leeds), Jill Johnson (University of Leeds), Stuart Barber (University of Leeds), Paul Blackwell (University of Sheffield)

Computationally expensive, complex computer programs are often used to model and predict real-world phenomena. The standard Gaussian process model has a drawback in that the computer code output is assumed to be homogeneous over the input space. Computer codes can behave very differently in various regions of the input space. Here, we introduce a piecewise Gaussian process model to deal with this problem where the input space is divided into separate regions using Voronoi tessellations (also known as Dirichlet tessellations, Thiessen polygons or the dual of the Delaunay triangulation). We demonstrate our method’s utility with an application in climate science.
UNQW02 6th February 2018
14:30 to 15:30
Guannan Zhang A domain-decomposition-based model reduction method for convection-diffusion equations with random coefficients
We focus on linear steady-state convection-diffusion equations with random-field coefficients. Of particular interest are two types of partial differential equations (PDEs), i.e., diffusion equations with random diffusivities, and convection-dominated transport equations with random velocity fields. For each of them, we investigate two types of random fields, i.e., the colored noise and the discrete white noise. We developed a new domain-decomposition-based model reduction (DDMR) method, which can exploit the low-dimensional structure of local solutions from various perspectives. We divide the physical domain into a set of non-overlapping sub-domains, generate local random fields and establish the correlation structure among local fields. We generate a set of reduced bases for the PDE solution within sub-domains and on interfaces, then define reduced local stiffness matrices by multiplying each reduced basis by the corresponding blocks of the local stiffness matrix. After that, we establish sparse approximations of the entries of the reduced local stiffness matrices in low-dimensional subspaces, which finishes the offline procedure. In the online phase, when a new realization of the global random field is generated, we map the global random variables to local random variables, evaluate the sparse approximations of the reduced local stiffness matrices, assemble the reduced global Schur complement matrix and solve the coefficients of the reduced bases on interfaces, and then assemble the reduced local Schur complement matrices and solve the coefficients of the reduced bases in the interior of the sub-domains. The advantages and contributions of our method lie in the following three aspects. First, the DDMR method has the online-offline decomposition feature, i.e., the online computational cost is independent of the finite element mesh size. Second, the DDMR method can handle the PDEs of interest with non-affine high-dimensional random coefficients. The challenge caused by the non-affine coefficients is resolved by approximating the entries of the reduced stiffness matrices. The high-dimensionality is handled by the DD strategy. Third, the DDMR method can avoid building polynomial sparse approximations to local PDE solutions. This feature is useful in solving the convection-dominated PDE, whose solution has a sharp transition caused by the boundary condition. We demonstrate the performance of our method based on the diffusion equation and convection-dominated equation with colored noises and discrete white noises.
UNQW02 7th February 2018
09:00 to 10:00
Martin Eigel Aspects of adaptive Galerkin FE for stochastic direct and inverse problems
Co-authors: Max Pfeffer (MPI MIS Leipzig), Manuel Marschall (WIAS Berlin), Reinhold Schneider (TU Berlin)

The Stochastic Galerkin Finite Element Method (SGFEM) is a common approach to numerically solve random PDEs with the aim to obtain a functional representation of the stochastic solution. As with any spectral method, the curse of dimensionality renders the approach challenging when the randomness depends on a large or countable infinite set of parameters. This makes function space adaptation and model reduction strategies a necessity. We review adaptive SGFEM based on reliable a posteriori error estimators for affine and non-affine parametric representations. Based on this, an adaptive explicit sampling-free Bayesian inversion in hierarchical tensor formats can be derived. As an outlook onto current research, a statistical learning viewpoint is presented, which connects concepts of UQ and machine learning from a Variational Monte Carlo perspective.
UNQW02 7th February 2018
10:00 to 11:00
Elaine Spiller Emulators for forecasting and UQ of natural hazards
Geophysical hazards – landslides, tsunamis, volcanic avalanches, etc. – which lead to catastrophic inundation are rare yet devastating events for surrounding communities. The rarity of these events poses two significant challenges. First, there are limited data to inform aleatoric scenario models, how frequent, how big, where. Second, such hazards often follow heavy-tailed distributions resulting in a significant probability that a larger-than-recorded catastrophe might occur. To overcome this second challenge, we must rely on physical models of these hazards to “probe” the tail for these catastrophic events. We will present an emulator-based strategy that allows great speed-up in Monte Carlo simulations for creating probabilistic hazard forecast maps. This approach offers the flexibility to explore both the impacts of epistemic uncertainties on hazard forecasts and of non-stationary scenario modeling on short term forecasts. Collaborators: Jim Berger (Duke), Eliza Calder (Edinburgh), Abani Patra (Buffalo), Bruce Pitman (Buffalo), Regis Rutarindwa (Marquette), Robert Wolpert (Duke)
UNQW02 7th February 2018
11:30 to 12:30
Panel comparisons: Challenor, Ginsbourger, Nobile, Teckentrup and Beck
UNQW02 8th February 2018
09:00 to 10:00
Ben Adcock Polynomial approximation of high-dimensional functions on irregular domains
Co-author: Daan Huybrechs (KU Leuven)

Smooth, multivariate functions defined on tensor domains can be approximated using orthonormal bases formed as tensor products of one-dimensional orthogonal polynomials. On the other hand, constructing orthogonal polynomials in irregular domains is difficult and computationally intensive. Yet irregular domains arise in many applications, including uncertainty quantification, model-order reduction, optimal control and numerical PDEs. In this talk I will introduce a framework for approximating smooth, multivariate functions on irregular domains, known as polynomial frame approximation. Importantly, this approach corresponds to approximation in a frame, rather than a basis; a fact which leads to several key differences, both theoretical and numerical in nature. However, this approach requires no orthogonalization or parametrization of the domain boundary, thus making it suitable for very general domains, including a priori unknown domains. I will discuss theoretical results for the approximation error, stability and sample complexity of this approach, and show its suitability for high-dimensional approximation through independence (or weak dependence) of the guarantees on the ambient dimension d. I will also present several numerical results, and highlight some open problems and challenges.
UNQW02 8th February 2018
10:00 to 11:00
Christine Shoemaker Deterministic RBF Surrogate Methods for Uncertainty Quantification, Global Optimization and Parallel HPC Applications
Co-author: Antoine Espinet (Cornell University)

This talk will describe general-purpose algorithms for global optimization. These algorithms can be used to estimate model parameters to fit complex simulation models to data, to select among alternative options for design or management, or to quantify model uncertainty. In general the numerical results indicate these algorithms do very well in comparison to alternatives, including Gaussian Process based approaches. Prof. Shoemaker’s group has developed open source (free) PySOT optimization software that is available online (18,000 downloads). The algorithms can be run in serial or parallel. The focus of the talk will be on SOARS, an Uncertainty Quantification method for using optimization-based sampling to build a surrogate likelihood function followed by additional sampling. The algorithm builds a surrogate approximation of the likelihood function based on simulations done during the optimization search. Then MCMC is performed by evaluating the surrogate likelihood function rather than the original expensive-to-evaluate function. Numerical results indicate the SOARS algorithm is very accurate when compared to the posterior densities computed when using the expensive exact likelihood function. I also discuss an application to a model of the underground movement of a plume of geologically sequestered carbon dioxide. The uncertainty in the parameter values obtained from the MCMC analysis on the surrogate likelihood function can be used to assess alternative strategies for identifying a cost-effective plan that will most efficiently give a reliable forecast of a carbon dioxide underground plume. This includes joint work with David Ruppert, Antoine Espinet, Nikolay Bliznyuk, and Yilun Wang.
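Loosely in the spirit of the surrogate-likelihood idea (but greatly simplified, and not the SOARS algorithm itself), the sketch below fits a cheap approximation to an "expensive" log-likelihood from a few design evaluations and then runs Metropolis-Hastings on the surrogate. All model and tuning choices are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_loglik(theta):
    """Stand-in for a log-likelihood that would require a costly simulation."""
    return -0.5 * ((theta - 1.7) / 0.4) ** 2

# Design evaluations of the expensive function (e.g. collected during an
# optimization search), then a simple polynomial surrogate fitted to them.
design = np.linspace(-2, 4, 15)
coeffs = np.polyfit(design, expensive_loglik(design), deg=4)
surrogate = lambda theta: np.polyval(coeffs, theta)

# Random-walk Metropolis-Hastings using only the cheap surrogate.
theta, chain = 0.0, []
for _ in range(20000):
    prop = theta + 0.5 * rng.standard_normal()
    if np.log(rng.random()) < surrogate(prop) - surrogate(theta):
        theta = prop
    chain.append(theta)

chain = np.array(chain[2000:])                    # discard burn-in
print("surrogate-posterior mean ~", chain.mean(), " (truth 1.7)")
```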

UNQW02 8th February 2018
11:30 to 12:30
Aretha Teckentrup Surrogate models in Bayesian Inverse Problems
Co-authors: Andrew Stuart (Caltech) , Han Cheng Lie and Timm Sullivan (Free University Berlin)

We are interested in the inverse problem of estimating unknown parameters in a mathematical model from observed data. We follow the Bayesian approach, in which the solution to the inverse problem is the probability distribution of the unknown parameters conditioned on the observed data, the so-called posterior distribution. We are particularly interested in the case where the mathematical model is non-linear and expensive to simulate, for example given by a partial differential equation. We consider the use of surrogate models to approximate the Bayesian posterior distribution. We present a general framework for the analysis of the error introduced in the posterior distribution, and discuss particular examples of surrogate models such as Gaussian process emulators and randomised misfit approaches.
UNQW02 8th February 2018
13:30 to 14:30
David Ginsbourger Positive definite kernels for deterministic and stochastic approximations of (invariant) functions
UNQW02 8th February 2018
14:30 to 15:30
Raul Fidel Tempone Uncertainty Quantification with Multi-Level and Multi-Index methods
We start by recalling the Monte Carlo and Multi-level Monte Carlo (MLMC) methods for computing statistics of the solution of a Partial Differential Equation with random data. Then, we present the Multi-Index Monte Carlo (MIMC) and Multi-Index Stochastic Collocation (MISC) methods. MIMC is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the MLMC method first described by Heinrich and Giles. Instead of using first-order differences as in MLMC, MIMC uses mixed differences to reduce the variance of the hierarchical differences dramatically, thus yielding improved convergence rates. MISC is a deterministic combination technique that also uses mixed differences to achieve better complexity than MIMC, provided enough regularity. During the presentation, we will showcase the behavior of the numerical methods in applications, some of them arising in the context of Regression based Surrogates and Optimal Experimental Design. Coauthors: J. Beck, L. Espath (KAUST), A.-L. Haji-Ali (Oxford), Q. Long (UT), F. Nobile (EPFL), M. Scavino (UdelaR), L. Tamellini (IMATI), S. Wolfers (KAUST) Webpages: https://stochastic_numerics.kaust.edu.sa https://sri-uq.kaust.edu.sa
UNQW02 8th February 2018
16:00 to 17:00
Maria Adamou Bayesian optimal design for Gaussian process model
Co-author: Dave Woods (University of Southampton)

Data collected from correlated processes arise in many diverse application areas including both computer and physical experiments, and studies in environmental science. Often, such data are used for prediction and optimisation of the process under study. For example, we may wish to construct an emulator of a computationally expensive computer model, or simulator, and then use this emulator to find settings of the controllable variables that maximise the predicted response. The design of the experiment from which the data are collected may strongly influence the quality of the model fit and hence the precision and accuracy of subsequent predictions and decisions. We consider Gaussian process models that are typically defined by a correlation structure that may depend upon unknown parameters. This parametric uncertainty may affect the choice of design points, and ideally should be taken into account when choosing a design. We consider a decision-theoretic Bayesian design for Gaussian process models which is usually computationally challenging as it requires the optimization of an analytically intractable expected loss function over a high-dimensional design space. We use a new approximation to the expected loss to find decision-theoretic optimal designs. The resulting designs are illustrated through a number of simple examples.
UNQW02 9th February 2018
09:00 to 10:00
Olivier Roustant Group covariance functions for Gaussian process metamodels with categorical inputs
Co-authors : E. Padonou (Mines Saint-Etienne), Y. Deville (AlpeStat), A. Clément (CEA), G. Perrin (CEA), J. Giorla (CEA) and H. Wynn (LSE).

Gaussian processes (GP) are widely used as metamodels for emulating time-consuming computer codes. We focus on problems involving categorical inputs, with a potentially large number of levels (typically several tens), partitioned in groups of various sizes. Parsimonious group covariance functions can then be defined by block covariance matrices with constant correlations between pairs of blocks and within blocks.

In this talk, we first present a formulation of GP models with categorical inputs, which makes a synthesis of existing ones and extends the usual homoscedastic and tensor-product frameworks. Then, we give a parameterization of the block covariance matrix described above, based on a hierarchical Gaussian model. The same model can be used when the assumption within blocks is relaxed, giving a flexible parametric family of valid covariance matrices with constant correlations between pairs of blocks.
We illustrate with an application in nuclear engineering, where one of the categorical inputs is the atomic number in Mendeleev's periodic table and has more than 90 levels.
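A toy construction of the block covariance structure described above: levels of a categorical input partitioned into groups, with one common correlation within each group and one per pair of groups. The group sizes and correlation values are arbitrary assumptions chosen to give a valid matrix.

```python
import numpy as np

group_sizes = [3, 4, 2]                        # 9 levels split into 3 groups
within = [0.8, 0.6, 0.7]                       # within-group correlations
between = {(0, 1): 0.3, (0, 2): 0.2, (1, 2): 0.4}   # between-group correlations

starts = np.cumsum([0] + group_sizes)
n = starts[-1]
C = np.zeros((n, n))
for g, rho in enumerate(within):               # constant correlation within each block
    C[starts[g]:starts[g + 1], starts[g]:starts[g + 1]] = rho
for (g, h), rho in between.items():            # constant correlation between blocks
    C[starts[g]:starts[g + 1], starts[h]:starts[h + 1]] = rho
    C[starts[h]:starts[h + 1], starts[g]:starts[g + 1]] = rho
np.fill_diagonal(C, 1.0)                       # unit variances on the diagonal

# Quick check that these choices give a valid (positive definite) matrix.
print("smallest eigenvalue:", np.linalg.eigvalsh(C).min())
```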
UNQW02 9th February 2018
10:00 to 11:00
Daniel Williamson Nonstationary Gaussian process emulators with covariance mixtures
Routine diagnostic checking of stationary Gaussian processes fitted to the output of complex computer codes often reveals nonstationary behaviour. There have been a number of approaches, both traditional and more recent, to modelling or accounting for this nonstationarity via the fitted process. These have included the fitting of complex mean functions to attempt to leave a stationary residual process (an idea that is often very difficult to get right in practice), using regression trees or other techniques to partition the input space into regions where different stationary processes are fitted (leading to arbitrary discontinuities in the modelling of the overall process), and other approaches which can be considered to live in one of these camps. In this work we allow the fitted process to be continuous by modelling the covariance kernel as a finite mixture of stationary covariance kernels and allowing the mixture weights to vary appropriately across parameter space. We introduce our method and compare its performance with the leading approaches in the literature for a variety of standard test functions and the cloud parameterisation of the French climate model. This is work led by my final-year PhD student, Victoria Volodina.
UNQW02 9th February 2018
11:30 to 12:30
Oliver Ernst High-Dimensional Collocation for Lognormal Diffusion Problems
Co-authors: Björn Sprungk (Universität Mannheim), Lorenzo Tamellini (IMATI-CNR Pavia)

Many UQ models consist of random differential equations in which one or more data components are uncertain and modeled as random variables. When the latter take values in a separable function space, their representation typically requires a large or countably infinite number of random coordinates. Numerical approximation methods for such functions of an infinite number of parameters based on best N-term approximation have recently been proposed and shown to converge at an algebraic rate. Collocation methods have a number of computational advantages over best N-term approximation, and we show how ideas successful there can be used to show a similar convergence rate for sparse collocation of Hilbert-space-valued functions depending on countably many Gaussian random variables. Such functions appear as solutions of elliptic PDEs with a lognormal diffusion coefficient. We outline a general L2-convergence theory based on previous work by Bachmayr et al. and Chen and establish an algebraic convergence rate for sufficiently smooth functions assuming a mild growth bound for the univariate hierarchical surpluses of the interpolation scheme applied to Hermite polynomials. We verify specifically for Gauss-Hermite nodes that this assumption holds and also show algebraic convergence with respect to the resulting number of sparse grid points for this case. Numerical experiments illustrate the dimension-independent convergence rate.
UNQW02 9th February 2018
13:30 to 14:30
Robert Gramacy Replication or exploration? Sequential design for stochastic simulation experiments
We investigate the merits of replication, and provide methods that search for optimal designs (including replicates), in the context of noisy computer simulation experiments. We first show that replication offers the potential to be beneficial from both design and computational perspectives, in the context of Gaussian process surrogate modeling. We then develop a lookahead based sequential design scheme that can determine if a new run should be at an existing input location (i.e., replicate) or at a new one (explore). When paired with a newly developed heteroskedastic Gaussian process model, our dynamic design scheme facilitates learning of signal and noise relationships which can vary throughout the input space. We show that it does so efficiently, on both computational and statistical grounds. In addition to illustrative synthetic examples, we demonstrate performance on two challenging real-data simulation experiments, from inventory management and epidemiology.
UNQW02 9th February 2018
14:30 to 15:30
Future directions panel
UNQ 14th February 2018
11:00 to 13:00
Lorenzo Tamellini Uncertainty Quantification of geochemical and mechanical compaction in layered sedimentary basins
This presentation is joint work of: Ivo Colombo (Dipartimento di Ingegneria Civile e Ambientale, Politecnico di Milano, Italy), Fabio Nobile (CSQI-MATHICSE, Ecole Polytechnique Fédérale de Lausanne, Switzerland), Giovanni Porta (Dipartimento di Ingegneria Civile e Ambientale, Politecnico di Milano, Italy), Anna Scotti (MOX, Dipartimento di Matematica, Politecnico di Milano, Italy) and Lorenzo Tamellini (CNR - Istituto di Matematica Applicata e Tecnologie Informatiche “E. Magenes”, Pavia, Italy).

In this work we propose an Uncertainty Quantification methodology for the evolution of sedimentary basins undergoing mechanical and geochemical compaction processes, which we model as a coupled, time-dependent, non-linear, monodimensional (depth-only) system of PDEs with uncertain parameters. Specifically, we consider multi-layered basins, in which each layer is characterized by a different material. The multi-layered structure gives rise to discontinuities in the dependence of the state variables on the uncertain parameters. Because of these discontinuities, an appropriate treatment is needed for surrogate modeling techniques such as sparse grids to be effective. To this end, we propose a two-step methodology which relies on a change of coordinate system to align the discontinuities of the target function within the random parameter space. Once this alignment has been computed, a standard sparse grid approximation of the state variables can be performed. The effectiveness of this procedure is due to the fact that the physical locations of the interfaces among layers feature a smooth dependence on the random parameters and are therefore amenable to sparse grid polynomial approximations. We showcase the capabilities of our numerical methodologies through some synthetic test cases.


UNQ 21st February 2018
11:00 to 13:00
Francois-Xavier Briol Bayesian Quadrature for Multiple Related Integrals
Bayesian probabilistic numerical methods are a set of tools providing posterior distributions on the output of numerical methods. The use of these methods is usually motivated by the fact that they can represent our uncertainty due to incomplete/finite information about the continuous mathematical problem being approximated. In this talk, we demonstrate that this paradigm can provide additional advantages, such as the possibility of transferring information between several numerical methods. This allows users to represent uncertainty in a more faithful manner and, as a by-product, provides increased numerical efficiency. We propose the first such numerical method by extending the well-known Bayesian quadrature algorithm to the case where we are interested in computing the integral of several related functions. We then demonstrate its efficiency in the context of multi-fidelity models for complex engineering systems, as well as a problem of global illumination in computer graphics.
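For reference, the sketch below shows only the standard single-integral Bayesian quadrature rule that such an extension builds on (squared-exponential kernel and standard normal integration measure, for which the kernel means are available in closed form); the multi-output transfer between related integrals discussed in the talk is not shown.

import numpy as np

def bq_gauss(x, f, lengthscale=1.0):
    # Bayesian quadrature of f against a standard normal measure in 1-d,
    # with a squared-exponential GP prior on the integrand.
    l2 = lengthscale ** 2
    n = len(x)
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / l2) + 1e-10 * np.eye(n)
    # Closed-form kernel means z_i = int k(x, x_i) N(x; 0, 1) dx
    z = np.sqrt(l2 / (l2 + 1.0)) * np.exp(-0.5 * x ** 2 / (l2 + 1.0))
    mean = z @ np.linalg.solve(K, f)          # posterior mean of the integral
    prior_var = np.sqrt(l2 / (l2 + 2.0))      # int int k(x, x') dN(x) dN(x')
    var = prior_var - z @ np.linalg.solve(K, z)
    return mean, var

# Example: integrate f(x) = x^2 against N(0, 1); the true value is 1.
x = np.linspace(-3, 3, 15)
mean, var = bq_gauss(x, x ** 2)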



UNQ 26th February 2018
14:00 to 16:00
Henry Wynn Informal talk - Optimum Experimental Design
UNQ 27th February 2018
14:00 to 16:00
Björn Sprungk Metropolis-Hastings algorithms for Bayesian inference in Hilbert spaces
In this talk we consider the Bayesian approach to inverse problems and infer uncertain coefficients in elliptic PDEs given noisy observations of the associated solution. After providing a short introduction to this approach and illustrating it with a real-world groundwater flow problem, we focus on Metropolis-Hastings (MH) algorithms for approximate sampling of the resulting posterior distribution. These methods can suffer both from a high-dimensional state space and from a highly concentrated posterior measure.

In recent years dimension-independent MH algorithms have been developed and analyzed, suitable for Bayesian inference in infinite dimensions. However, the second issue of a concentrated posterior has so far drawn less attention in the study of MH algorithms, despite its importance in applications.

We present an MH algorithm, well-defined in Hilbert spaces, which possesses both desirable properties: dimension-independent performance as well as robust behaviour w.r.t. small noise levels in the observational data. Moreover, we give a first analysis of the noise-independence of MH algorithms in terms of the expected acceptance rate and the expected squared jump distance of the resulting Markov chains. Numerical experiments confirm the theoretical results and also indicate that they hold in more general situations than proven.
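One well-known example of a dimension-independent MH scheme of this kind is the preconditioned Crank-Nicolson (pCN) proposal; the sketch below shows it for a finite-dimensional discretisation with Gaussian prior N(0, C) and a hypothetical log-likelihood, and is meant as background rather than the specific noise-robust algorithm of the talk.

import numpy as np

def pcn_mh(log_likelihood, C_chol, n_steps=10000, beta=0.2, rng=None):
    # Preconditioned Crank-Nicolson Metropolis-Hastings for a Gaussian prior
    # N(0, C), where C_chol is a Cholesky factor of the prior covariance.
    # The acceptance ratio involves only the likelihood, which is what makes
    # the scheme well defined independently of the discretisation dimension.
    rng = np.random.default_rng() if rng is None else rng
    d = C_chol.shape[0]
    u = C_chol @ rng.standard_normal(d)            # start from a prior draw
    ll_u = log_likelihood(u)
    samples = np.empty((n_steps, d))
    for i in range(n_steps):
        xi = C_chol @ rng.standard_normal(d)
        v = np.sqrt(1.0 - beta ** 2) * u + beta * xi
        ll_v = log_likelihood(v)
        if np.log(rng.uniform()) < ll_v - ll_u:    # prior terms cancel
            u, ll_u = v, ll_v
        samples[i] = u
    return samples

# Hypothetical example: observe y = sum(u) + noise, with prior N(0, I).
d, y, sigma = 50, 3.0, 0.1
log_lik = lambda u: -0.5 * (y - u.sum()) ** 2 / sigma ** 2
chain = pcn_mh(log_lik, np.eye(d))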



UNQ 1st March 2018
14:00 to 15:00
Jens Lang Adaptivity in Numerical Methods for ODEs and PDEs
In this talk I will emphasize the use of adaptive strategies in numerical algorithms to solve systems of ordinary and partial differential equations more efficiently and reliably. After a brief introduction to local and global error control for time integrators, general approaches to combining adaptivity in space and time are discussed. Finally, I will speak about recent developments in using adaptive multilevel strategies for PDE-constrained optimization and uncertainty quantification. Throughout my talk I will present numerical results for academic as well as real-life applications including chemical reaction-diffusion systems, regional hyperthermia, electro-cardiology, magneto-quasistatics, glass cooling and complex turbulent flows.



UNQW03 5th March 2018
09:45 to 10:30
Wolfgang Dahmen Parametric PDEs: Sparse Polynomial or Low-Rank Approximation?
We discuss some recent results obtained jointly with Markus Bachmayr and Albert Cohen on the adaptive approximation of parametric solutions for a class of uniformly elliptic parametric PDEs. We first briefly review essential approximability properties of the solution manifold with respect to several approximation types (sparse polynomial expansions, low-rank approximations, hierarchical tensor formats) which then serve as benchmarks for numerical algorithms. We then discuss a fully adaptive algorithmic template with respect to both spatial and parametric variables which can be specified to any of the above approximation types. It completely avoids the inversion of large linear systems and can be shown to achieve any given target accuracy with a certified bound without any a priori assumptions on the solutions. Moreover, the computational complexity (in terms of operation count) can be proven to be optimal (up to uniformly constant factors) for these benchmark classes. That is, it achieves a given target accuracy by keeping the number of adaptively generated degrees of freedom near-minimal at linear computational cost. We discuss these findings from several perspectives such as: which approximation type is best suited for which problem specification, the role of parametric expansion types, or intrusive versus non-intrusive schemes.
UNQW03 5th March 2018
11:00 to 11:45
Jim Gattiker Complexity Challenges in Uncertainty Quantification for Scientific and Engineering Applications.
Uncertainty Quantification (UQ) is established as an aspect of model-supported inference in scientific and engineering systems. With this expectation comes the desire for addressing increasingly complex applications that challenge the ability of UQ to scale. This talk will describe the view of UQ as a full-system modeling and analysis framework for scientific and engineering models, motivated by the example application of Carbon Capture technology development. Some of the challenging issues that have come up in this multi-level, component-to-full-system modeling effort are discussed, along with strategies for addressing them. Another challenge area is dealing with multivariate responses, and some implications and challenges in this area will also be discussed.
UNQW03 5th March 2018
11:45 to 12:30
Leanna House Human-in-the-Loop Analytics: Two Approaches and Two Applications
This will be a two-part talk that presents two applications of human-in-the-loop analytics. The first part takes a traditional approach in eliciting judgement from experts to specify subjective priors in the context of uncertainty quantification of simulators. In particular, the approach applies and verifies a method called Reification (Goldstein and Rougier, 2008), where experts initiate uncertainty analyses by specifying a hypothetical, high-fidelity computer model. With this hypothetical model, we can decompose potential discrepancy between a given simulator and reality. The second part of the talk places experts in the middle of analyses via Bayesian Visual Analytics (House et al., 2015) so that experts may explore data and offer feedback continuously. For BaVA, we use reduced-dimensional visualizations within interactive software so that experts may communicate their judgements by interacting with data. Based on interactions, we parameterize and specify ``feedback distributions'', rather than prior distributions, for analyses. We exemplify BaVA using a dataset about animals. To conclude, I hope to engage in an open discussion of how we can use BaVA in Uncertainty Quantification of Computer Models.
UNQW03 5th March 2018
14:00 to 14:45
Francisco Alejandro Diaz De la O Reliability-based Sampling for Model Calibration
History Matching is a calibration technique that systematically reduces the input space in a numerical model. At every iteration, an implausibility measure discards combinations of input values that are unlikely to provide a match between model output and experimental observations. As the input space reduces, sampling becomes increasingly challenging due to the shrinking relative volume of the non-implausible space and the fact that it can exhibit a complex, disconnected geometry. Since realistic numerical models are computationally expensive, surrogate models and dimensionality reduction are commonly employed. In this talk we will explore how Subset Simulation, a Markov chain Monte Carlo technique from engineering reliability analysis, can solve the sampling problem in History Matching. We will also explore alternative implausibility measures that can guide the selection of regions of the non-implausible space in order to balance exploration and exploitation in sampling.
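For concreteness, one commonly used form of the implausibility measure is sketched below: the standardised distance between an observation and the emulator prediction at an input, with a simplified, illustrative variance budget (emulator, observation and model-discrepancy terms).

import numpy as np

def implausibility(emulator_mean, emulator_var, z, obs_var, disc_var):
    # I(x): standardised distance between the observation z and the emulator
    # prediction at input x; the exact variance budget is problem-specific.
    return np.abs(z - emulator_mean) / np.sqrt(emulator_var + obs_var + disc_var)

# Inputs with I(x) above a cutoff (commonly 3) are ruled implausible.
I = implausibility(np.array([1.2, 4.0]), np.array([0.1, 0.1]),
                   z=1.0, obs_var=0.05, disc_var=0.05)
non_implausible = I < 3.0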
UNQW03 5th March 2018
14:45 to 15:30
Tan Bui-Thanh A Triple Model Reduction for Data-Driven Large-Scale Inverse Problems in High Dimensional Parameter Spaces
Co-authors: Ellen Le (The University of Texas at Austin), Aaron Myers (The University of Texas at Austin), Brad Marvin (The University of Texas at Austin), Vishwas Rao (Argonne National Laboratory)

We present an approach to address the challenge of data-driven large-scale inverse problems in high dimensional parameter spaces. The idea is to combine a goal-oriented model reduction approach for the state, a data-informed/active-subspace reduction for the parameters, and a randomized misfit approach for data reduction. The method is designed to mitigate the bottlenecks of large-scale PDE solves, high-dimensional parameter space exploration, and ever-increasing volumes of data. Various theoretical and numerical results will be presented to support the proposed approach.
UNQW03 5th March 2018
16:00 to 16:45
Poster Blitz
UNQW03 6th March 2018
09:00 to 09:45
Michael Goldstein Small sample designs for multi-level, multi-output computer simulators
This talk is concerned with some general principles for the design of experiments to learn about computer simulators with many inputs and many outputs which may be evaluated at different levels of accuracy and for which the top level simulator is expensive to evaluate for any input choice. Our aim is to use many evaluations of a fast approximation to the simulator to build an informative prior for the top level simulator. Based on this prior, we may build a small but informative design for the slow simulator. We will illustrate the methodology on a design problem for an oil reservoir simulator. (The work is joint with Jonathan Cumming.)
UNQW03 6th March 2018
09:45 to 10:30
Bruno Sudret Dimensionality reduction and surrogate modelling for high-dimensional UQ problems
In order to predict the behaviour of complex engineering systems (nuclear power plants, aircraft, infrastructure networks, etc.), analysts nowadays develop high-fidelity computational models that try and capture detailed physics. A single run of such simulators can take minutes to hours even on the most advanced HPC architectures. In the context of uncertainty quantification, methods based on Monte Carlo simulation are simply not affordable. This has led to the rapid development of surrogate modelling techniques in the last decade, e.g. polynomial chaos expansions, low-rank tensor representations, Kriging (a.k.a. Gaussian process models) among others. Surrogate models have proven remarkably efficient in the case of moderate dimensionality (e.g. tens to a hundred inputs). In the case of high-dimensional problems (hundreds to thousands of inputs), or when the input is cast as time series, 2D maps, etc., the classical set-up of surrogate modelling does not apply straightforwardly. Usually, a pre-processing of the data is carried out to reduce this dimensionality, before a surrogate is constructed. In this talk, we show that the sequential use of compression algorithms (for dimensionality reduction (DR), e.g. kernel principal component analysis) and surrogate modelling (SM) is suboptimal. Instead, we propose a new general-purpose framework that casts the two sub-problems into a single DRSM optimization. In this set-up, the parameters of the DR step are selected so as to maximize the quality of the subsequent surrogate model. The framework is versatile in the sense that the techniques used for DR and for SM can be freely selected and combined. Moreover, the method is purely data-driven. The proposed approach is illustrated on different engineering problems including 1D and 2D elliptical SPDEs and earthquake engineering applications.
UNQW03 6th March 2018
11:00 to 11:45
Jens Lang Adaptive Multilevel Stochastic Collocation Method for Randomized Elliptic PDEs
In this talk, I will present a new adaptive multilevel stochastic collocation method for randomized elliptic PDEs. A hierarchical sequence of adaptive mesh refinements for the spatial approximation is combined with adaptive anisotropic sparse Smolyak grids in the stochastic space in such a way as to minimize computational cost. I provide a rigorous analysis for the convergence and computational complexity of the adaptive multilevel algorithm. This is a joint work with Robert Scheichl from Bath.
UNQW03 6th March 2018
11:45 to 12:30
Elisabeth Ullmann Multilevel estimators in Bayesian Inversion and Optimization
Uncertainty quantification (UQ) is a fast growing research area which deals with the impact of parameter, data and model uncertainties in complex systems. We focus on models which are based on partial differential equations (PDEs) with random inputs. For deterministic PDEs there are many classical analytical results and numerical tools available. The treatment of PDEs with random inputs, however, requires novel ideas and tools. We illustrate the mathematical and algorithmic challenges of UQ for Bayesian inverse problems arising from geotechnical engineering and medicine, and an optimal control problem with uncertain PDE constraints.
UNQW03 6th March 2018
14:00 to 14:45
James Hensman Massive scale Gaussian processes with GPflow
In this talk I'll give an overview of how machine learning techniques have been used to scale Gaussian process models to huge datasets. I'll also introduce GPflow, a software library for Gaussian processes that leverages the computational framework TensorFlow, which is more commonly used for deep learning.
UNQW03 6th March 2018
14:45 to 15:30
Abdul Lateef Haji Ali Multilevel Nested Simulation for Efficient Risk Estimation
We investigate the problem of computing a nested expectation of the form P[E[X|Y] >= 0] = E[H(E[X|Y])] where H is the Heaviside function. This nested expectation appears, for example, when estimating the probability of a large loss from a financial portfolio. We present a method that combines the idea of using Multilevel Monte Carlo (MLMC) for nested expectations with the idea of adaptively selecting the number of samples in the approximation of the inner expectation, as proposed by Broadie et al. (2011). We propose and analyse an algorithm that adaptively selects the number of inner samples on each MLMC level and prove that the resulting MLMC method with adaptive sampling has complexity of order e^-2 |log(e)|^2 to achieve a root mean-squared error e. The theoretical analysis is verified by numerical experiments on a simple model problem. Joint work with: Michael B. Giles (University of Oxford)
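A minimal sketch of the non-adaptive MLMC idea for this nested expectation is given below, using a toy model with Y ~ N(0,1) and X|Y ~ N(Y,1), for which the true probability is 0.5. The number of inner samples doubles per level and the coarse estimator reuses half of the fine inner samples; the adaptive selection of inner samples analysed in the talk is not shown, and the sample-size allocation is illustrative.

import numpy as np

rng = np.random.default_rng(0)

def level_term(level, n_outer, m0=1):
    # MLMC level-l term for P[E[X|Y] >= 0] = E[H(E[X|Y])] on the toy model.
    # Level l uses m0 * 2^l inner samples; the coarse estimator reuses the
    # first half of the same inner samples so fine and coarse are coupled.
    m_fine = m0 * 2 ** level
    y = rng.standard_normal(n_outer)
    x = y[:, None] + rng.standard_normal((n_outer, m_fine))
    p_fine = (x.mean(axis=1) >= 0).astype(float)
    if level == 0:
        return p_fine.mean()
    p_coarse = (x[:, : m_fine // 2].mean(axis=1) >= 0).astype(float)
    return (p_fine - p_coarse).mean()

levels = [200000, 100000, 50000, 25000, 12000, 6000]   # outer samples per level
estimate = sum(level_term(l, n) for l, n in enumerate(levels))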
UNQW03 6th March 2018
16:00 to 16:45
Nathan Urban Multi-model and model structural uncertainty quantification with applications to climate science
A common approach to quantifying the uncertainty in computer model predictions is to calibrate their tuning parameters to observational data. However, the largest uncertainties may not lie in the models' parameters, but rather in their "structures": modelers make different choices in numerical schemes, physics approximations, sub-grid closures, and included processes. These choices result in different models that all claim to represent the same system dynamics, but may disagree in their predictions. This talk is aimed at presenting concepts and motivation concerning how to address such multi-model or structural uncertainty challenges. I present three methods. The first method, Bayesian multi-model combination, converts structural uncertainties in multiple computer models into parametric uncertainties within a reduced model. A hierarchical Bayesian statistical approach combines these parametric uncertainties into a single distribution representing multi-model uncertainty, which can be updated with observational constraints to dynamically bias-correct the multi-model ensemble. The second method uses system identification techniques to learn the governing equations of a PDE system. A non-intrusive model reduction approach is developed to rapidly explore uncertainties in alternate model structures by perturbing the learned dynamics. The third method is aimed at integrated uncertainty problems that require propagating uncertainties through multiple system components. It constructs a Bayesian network or graphical model where each node in the network quantifies uncertainties in a particular physical process, which can be informed by multiple types of model and data.
UNQW03 7th March 2018
09:00 to 09:45
Lars Grasedyck Hierarchical Low Rank Tensors
Co-authors: Sebastian Krämer, Christian Löbbert, Dieter Moser (RWTH Aachen)

We introduce the concept of hierarchical low rank decompositions and approximations by use of the hierarchical Tucker format. In order to relate it to several other existing low rank formats we highlight differences, similarities as well as bottlenecks of these. One particularly difficult question is whether or not a tensor or multivariate function allows a priori a low rank representation or approximation. This question can be related to simple matrix decompositions or approximations, but still the question is not easy to answer, cf. the talks of Wolfgang Dahmen, Sergey Dolgov, Martin Stoll and Anthony Nouy. We provide numerical evidence for a model problem that the approximation can be efficient in terms of a small rank. In order to find such a decomposition or approximation we consider black box (cross) type non-intrusive sampling approaches. A special emphasis will be on postprocessing of the tensors, e.g. finding extremal points efficiently. This is of special interest in the context of model reduction and reliability analysis.
UNQW03 7th March 2018
09:45 to 10:30
Jie Chen Linear-Cost Covariance Functions for Gaussian Random Fields
Co-author: Michael L. Stein (University of Chicago)

Gaussian random fields (GRF) are a fundamental stochastic model for spatiotemporal data analysis. An essential ingredient of GRF is the covariance function that characterizes the joint Gaussian distribution of the field. Commonly used covariance functions give rise to fully dense and unstructured covariance matrices, for which required calculations are notoriously expensive to carry out for large data. In this work, we propose a construction of covariance functions that result in matrices with a hierarchical structure. Empowered by matrix algorithms that scale linearly with the matrix dimension, the hierarchical structure is proved to be efficient for a variety of random field computations, including sampling, kriging, and likelihood evaluation. Specifically, with n scattered sites, sampling and likelihood evaluation have an O(n) cost and kriging has an O(log n) cost after preprocessing, particularly favorable for the kriging of an extremely large number of sites (e.g., predicting on more sites than observed). We demonstrate comprehensive numerical experiments to show the use of the constructed covariance functions and their appealing computation time. Numerical examples on a laptop include simulated data of size up to one million, as well as a climate data product with over two million observations.
UNQW03 7th March 2018
11:00 to 11:45
Sergey Dolgov Low-rank cross approximation algorithms for the solution of stochastic PDEs
Co-authors: Robert Scheichl (University of Bath)

We consider the approximate solution of parametric PDEs using the low-rank Tensor Train (TT) decomposition. Such parametric PDEs arise for example in uncertainty quantification problems in engineering applications. We propose an algorithm that is a hybrid of the alternating least squares and the TT cross methods. It computes a TT approximation of the whole solution, which is particularly beneficial when multiple quantities of interest are sought. The new algorithm exploits and preserves the block diagonal structure of the discretized operator in stochastic collocation schemes. This disentangles computations of the spatial and parametric degrees of freedom in the TT representation. In particular, it only requires solving independent PDEs at a few parameter values, thus allowing the use of existing high performance PDE solvers. We benchmark the new algorithm on the stochastic diffusion equation against quasi-Monte Carlo and dimension-adaptive sparse grid methods. For sufficiently smooth random fields the new approach is orders of magnitude faster.
UNQW03 7th March 2018
11:45 to 12:30
Daniel Williamson Optimal dimension-reduced calibration and the terminal case for spatial models
Since the seminal paper by Kennedy and O’Hagan in 2001, the calibration of computer models using Gaussian process emulators has represented a gold standard for scientists and statisticians working to quantify uncertainty using complex computer codes. When the output of such codes is high dimensional, such as with the spatial fields routinely produced by climate models, the standard approach (attributed to Higdon in 2008) is to take principal components across the model output, and use the loadings on these as a lower dimensional representation of the model output that can be used within the Kennedy and O’Hagan framework. In this talk I will argue that, in general, we should not expect this to work. I will introduce what we term a “terminal case analysis” for general computer model calibration, show the implications for inference of a terminal case analysis and argue that though a high dimensional computer model may not lead to a terminal case analysis, the standard statistical treatment outlined above invariably leads to one artificially. I will then present our solution to this which uses rotation ideas to fix the search directions of our lower dimensional representations so that general calibration of spatio-temporal models is possible. We apply our method to idealised examples and to the output of the state of the art Canadian atmosphere model CanAGCM4. We will see that the problem of calibrating climate models requires a great deal of novel statistical thinking before we, as a community, can claim to have a solution ready for this important application area. This is work done with (and by) Dr James Salter.
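For background, the sketch below shows the standard principal-component (basis) representation step referred to above: spatial fields from an ensemble of simulator runs are projected onto a small set of basis vectors, and the loadings provide the low-dimensional representation to be emulated and calibrated. The ensemble here is a random placeholder, and the basis rotation proposed in the talk is not shown.

import numpy as np

# Ensemble of n_runs spatial fields, each flattened to n_grid values
# (placeholder data; in practice these are the simulator outputs).
n_runs, n_grid = 60, 2500
ensemble = np.random.rand(n_runs, n_grid)

# Centre the ensemble and take the leading principal components.
mean_field = ensemble.mean(axis=0)
centred = ensemble - mean_field
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
k = 5                               # number of retained basis vectors
basis = Vt[:k]                      # k x n_grid spatial patterns

# Each run is summarised by its k loadings; emulators are then built for
# these loadings as functions of the simulator inputs.
loadings = centred @ basis.T        # n_runs x k
reconstruction = mean_field + loadings @ basis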
UNQW03 8th March 2018
09:00 to 09:45
Gianluigi Rozza Weighted reduced order methods for parametrized PDEs with random inputs
In this talk we discuss a weighted approach for the reduction of parametrized partial differential equations with random input, focusing in particular on weighted approaches based on reduced basis (wRB) [Chen et al., SIAM Journal on Numerical Analysis (2013); D. Torlo et al., submitted (2017)] and proper orthogonal decomposition (wPOD) [L. Venturi et al., submitted (2017)]. We will first present the wPOD approach. A first topic of discussion is related to the choice of samples and respective weights according to a quadrature formula. Moreover, to reduce the computational effort in the offline stage of wPOD, we will employ Smolyak quadrature rules. We will then introduce the wRB method for advection diffusion problems with dominant convection with random input parameters. The issue of the stabilisation of the resulting reduced order model will be discussed in detail. This work is in collaboration with Francesco Ballarin (SISSA, Trieste), Davide Torlo (University of Zurich), and Luca Venturi (Courant Institute for Mathematical Sciences, NYC)
UNQW03 8th March 2018
09:45 to 10:30
Julia Brettschneider Model selection, model frames, and scientific interpretation
Modelling complex systems in engineering, science or social science involves selection of measurements on many levels including observability (determined e.g. by technical equipment, cost, confidentiality, existing records) and need for interpretability. Among the initially selected variables, the frequency and quality of observation may be altered by censoring and sampling biases. A model is, by definition, a simplification, and the question one asks is often not whether a certain effect exists, but whether it matters. This crucially depends on the research objective or perspective. Biased conclusions occur when the research question is interwoven with the mechanisms in which the variables for the analysis are selected or weighted. Such effects can occur in any applications that involve observational data. I will give some examples from a few of my own research projects involving quality assessment, decision making, financial trading, genomics and microscopy.
UNQW03 8th March 2018
11:00 to 11:45
Olga Mula Greedy algorithms for optimal measurements selection in state estimation using reduced models
Co-authors: Peter BINEV (University of South Carolina), Albert COHEN (University Pierre et Marie Curie), James NICHOLS (University Pierre et Marie Curie)

Parametric PDEs of the general form
$$\mathcal{P} (u,a) = 0$$
are commonly used to describe many physical processes, where $\cal P$ is a differential operator, $a$ is a high-dimensional vector of parameters and $u$ is the unknown solution belonging to some Hilbert space $V$. A typical scenario in state estimation is the following: for an unknown parameter $a$, one observes $m$ independent linear measurements of $u(a)$ of the form $\ell_i(u) = (w_i, u), i = 1, ..., m$, where $\ell_i \in V'$ and $w_i$ are the Riesz representers, and we write $W_m = \text{span}\{w_1,...,w_m\}$. The goal is to recover an approximation $u^*$ of $u$ from the measurements. Due to the dependence on $a$, the solutions of the PDE lie on a manifold, and the particular PDE structure often allows one to derive good approximations of it by linear spaces $V_n$ of moderate dimension $n$. In this setting, the observed measurements and $V_n$ can be combined to produce an approximation $u^*$ of $u$ up to accuracy
$$
\Vert u -u^* \Vert \leq \beta(V_n, W_m) \text{dist}(u, V_n)
$$
where
$$
\beta(V_n, W_m) := \inf_{v\in V_n} \frac{\Vert P_{W_m} v \Vert}{\Vert v \Vert}
$$
plays the role of a stability constant. For a given $V_n$, one relevant objective is to guarantee that $\beta(V_n, W_m) \geq \gamma >0$ with a number of measurements $m \geq n$ as small as possible. We present results in this direction when the measurement functionals $\ell_i$ belong to a complete dictionary. If time permits, we will also briefly explain ongoing research on how to adapt the reconstruction technique to noisy measurements.
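In a finite-dimensional discretisation with orthonormal bases for $V_n$ and $W_m$ (and $m \geq n$), the stability constant above reduces to the smallest singular value of the cross-Gramian $W^\top V$; the sketch below computes it, with random placeholder bases.

import numpy as np

def stability_constant(V, W):
    # beta(V_n, W_m) = inf_{v in V_n} ||P_{W_m} v|| / ||v||, assuming the
    # columns of V and W are orthonormal bases and m >= n; this equals the
    # smallest singular value of the cross-Gramian W^T V.
    return np.linalg.svd(W.T @ V, compute_uv=False).min()

# Placeholder bases in an ambient space of dimension N (illustrative only).
N, n, m = 200, 8, 20
V = np.linalg.qr(np.random.randn(N, n))[0]
W = np.linalg.qr(np.random.randn(N, m))[0]
beta = stability_constant(V, W)     # beta > 0 indicates stable recovery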

UNQW03 8th March 2018
11:45 to 12:30
Martin Stoll Low rank methods for PDE-constrained optimization
Optimization subject to PDE constraints is crucial in many applications. Numerical analysis has contributed a great deal to allow for the efficient solution of these problems and our focus in this talk will be on the solution of the large scale linear systems that represent the first order optimality conditions. We illustrate that these systems, while being of very large dimension, usually contain a lot of mathematical structure. In particular, we focus on low-rank methods that utilize the Kronecker product structure of the system matrices. These methods allow the solution of a time-dependent problem with the storage requirements of a small multiple of the steady problem. Furthermore, this technique can be used to tackle the added dimensionality when we consider optimization problems subject to PDEs with uncertain coefficients. The stochastic Galerkin FEM technique leads to a vast dimensional system that would be infeasible on any computer, but using low-rank techniques this can be solved on a standard laptop computer.
UNQW03 8th March 2018
14:00 to 14:45
Boris Kramer Conditional-Value-at-Risk Estimation with Reduced-Order Models
We present two reduced-order model based approaches  for the efficient and accurate evaluation of the Conditional-Value-at-Risk  (CVaR) of quantities of interest (QoI) in engineering systems with uncertain parameters.  CVaR is used to model objective or constraint functions in risk-averse engineering design and optimization applications under uncertainty.  Estimating the CVaR of the QoI is expensive. While the distribution of the uncertain system parameters is known, the resulting QoI is a random variable that is implicitly determined via the state of the system. Evaluating the CVaR of the QoI requires  sampling in the tail of the QoI distribution and typically requires  many solutions of an expensive full-order model of the engineering system. Our reduced-order model approaches substantially reduce this computational expense.
UNQW03 8th March 2018
14:45 to 15:30
Olivier Zahm Certified dimension reduction of the input parameter space of vector-valued functions
Co-authors: Paul Constantine (University of Colorado), Clémentine Prieur (University Joseph Fourier), Youssef Marzouk (MIT)

Approximation of multivariate functions is a difficult task when the number of input parameters is large. Identifying the directions where the function does not significantly vary is a key preprocessing step to reduce the complexity of the approximation algorithms.

Among other dimensionality reduction tools, the active subspace is defined by means of the gradient of a scalar-valued function. It can be interpreted as the subspace in the parameter space where the gradient varies the most. In this talk, we propose a natural extension of the active subspace for vector-valued functions, e.g. functions with multiple scalar-valued outputs or functions taking values in function spaces. Our methodology consists in minimizing an upper-bound of the approximation error obtained using Poincaré-type inequalities.

We also compare the proposed gradient-based approach with the popular and widely used truncated Karhunen-Loève decomposition (KL). We show that, from a theoretical perspective, the truncated KL can be interpreted as a method which minimizes a looser upper bound of the error compared to the one we derived. Also, numerical comparisons show that better dimension reduction can be obtained provided gradients of the function are available. 
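For the scalar-valued case, the gradient-based subspace referred to above is commonly estimated from the eigendecomposition of the averaged outer product of gradients; the sketch below illustrates this with a Monte Carlo approximation and a hypothetical test function. The vector-valued extension and the Poincaré-type error bounds of the talk are not shown.

import numpy as np

def active_subspace(grad_samples, k):
    # Estimate a k-dimensional active subspace from gradient samples: take the
    # leading eigenvectors of C = E[grad f(X) grad f(X)^T].
    C = grad_samples.T @ grad_samples / grad_samples.shape[0]
    eigvals, eigvecs = np.linalg.eigh(C)
    order = np.argsort(eigvals)[::-1]
    return eigvals[order], eigvecs[:, order[:k]]

# Hypothetical test function f(x) = sin(a . x): its gradient always points
# along a, so a one-dimensional active subspace should be recovered.
d, n = 20, 5000
a = np.random.randn(d)
X = np.random.randn(n, d)
grads = np.cos(X @ a)[:, None] * a[None, :]
eigvals, W1 = active_subspace(grads, k=1)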
UNQW03 8th March 2018
16:00 to 16:45
James Salter Quantifying spatio-temporal boundary condition uncertainty for the deglaciation
Ice sheet models are currently unable to reproduce the retreat of the North American ice sheet through the last deglaciation, due to the large uncertainty in the boundary conditions. To successfully calibrate such a model, it is important to vary both the input parameters and the boundary conditions. These boundary conditions are derived from global climate model simulations, and hence the biases from the output of these models are carried through to the ice sheet output, restricting the range of ice sheet output that is possible. Due to the expense of running global climate models for the required 21,000 years, there are only a small number of such runs available; hence it is difficult to quantify the boundary condition uncertainty. We develop a methodology for generating a range of plausible boundary conditions, using a low-dimensional basis representation for the spatio-temporal input required. We derive this basis by combining key patterns, extracted from a small climate model ensemble of runs through the deglaciation, with sparse spatio-temporal observations. Varying the coefficients for the chosen basis vectors and ice sheet parameters simultaneously, we run ensembles of the ice sheet model. By emulating the ice sheet output, we history match iteratively and rule out combinations of the ice sheet parameters and boundary condition coefficients that lead to implausible deglaciations, reducing the uncertainty due to the boundary conditions.
UNQW03 9th March 2018
09:00 to 09:45
Peter Binev State Estimation in Reduced Modeling
Co-authors: Albert Cohen (University Paris 6), Wolfgang Dahmen (University of South Carolina), Ronald DeVore (Texas A&M University), Guergana Petrova (Texas A&M University), Przemyslaw Wojtaszczyk (University of Warsaw)

We consider the problem of optimal recovery of an element u of a Hilbert space H from measurements of the form l_j(u), j = 1, ... , m, where the l_j are known linear functionals on H. Motivated by reduced modeling for solving parametric partial differential equations, we investigate a setting where the additional information about the solution u is in the form of how well u can be approximated by a certain known subspace V_n of H of dimension n, or more generally, in the form of how well u can be approximated by each of a sequence of nested subspaces V_0, V_1, ... , V_n with each V_k of dimension k. The goal is to exploit additional information derived from the whole hierarchy of spaces rather than only from the largest space V_n. It is shown that, in this multispace case, the set of all u that satisfy the given information can be described as the intersection of a family of known ellipsoidal cylinders in H and that a near optimal recovery algorithm in the multi-space problem is provided by identifying any point in this intersection.
UNQW03 9th March 2018
09:45 to 10:30
Catherine Powell Reduced Basis Solvers for Stochastic Galerkin Matrix Equations
In the applied mathematics community, reduced basis methods are typically used to reduce the computational cost of applying sampling methods to parameter-dependent partial differential equations (PDEs). When dealing with PDE models in particular, repeatedly running computer models (e.g. finite element solvers) for many choices of the input parameters is computationally infeasible. The cost of obtaining each sample of the numerical solution is instead reduced by projecting the so-called high fidelity problem into a reduced (lower-dimensional) space. The choice of reduced space is crucial in balancing cost and overall accuracy. In this talk, we do not consider sampling methods. Rather, we consider stochastic Galerkin finite element methods (SGFEMs) for parameter-dependent PDEs. Here, the idea is to approximate the solution to the PDE model as a function of the input parameters. We combine finite element approximation in physical space, with global polynomial approximation on the parameter domain. In the statistics community, the term intrusive polynomial chaos approximation is often used. Unlike sampling methods, which require the solution of many deterministic problems, SGFEMs yield a single very large linear system of equations with coefficient matrices that have a characteristic Kronecker product structure. By reformulating the systems as multiterm linear matrix equations, we have developed [see: C.E. Powell, D. Silvester, V. Simoncini, An efficient reduced basis solver for stochastic Galerkin matrix equations, SIAM J. Sci. Comput. 39(1), pp. A141-A163 (2017)] a memory-efficient solution algorithm which generalizes ideas from rational Krylov subspace approximation (which are known in the linear algebra community). The new approach determines a low-rank approximation to the solution matrix by performing a projection onto a reduced space that is iteratively augmented with problem-specific basis vectors. Crucially, it requires far less memory than standard iterative methods applied to the Kronecker formulation of the linear systems. For test problems consisting of elliptic PDEs, and indefinite problems with saddle point structure, we are able to solve systems of billions of equations on a standard desktop computer quickly and efficiently.
UNQW03 9th March 2018
11:00 to 11:45
Benjamin Peherstorfer Multifidelity Monte Carlo estimation with adaptive low-fidelity models
Multifidelity Monte Carlo (MFMC) estimation combines low- and high-fidelity models to speed up the estimation of statistics of the high-fidelity model outputs. MFMC optimally samples the low- and high-fidelity models such that the MFMC estimator has minimal mean-squared error for a given computational budget. In the setup of MFMC, the low-fidelity models are static, i.e., they are given and fixed and cannot be changed and adapted. We introduce the adaptive MFMC (AMFMC) method that splits the computational budget between adapting the low-fidelity models to improve their approximation quality and sampling the low- and high-fidelity models to reduce the mean-squared error of the estimator. Our AMFMC approach derives the quasi-optimal balance between adaptation and sampling in the sense that our approach minimizes an upper bound of the mean-squared error, instead of the error directly. We show that the quasi-optimal number of adaptations of the low-fidelity models is bounded even in the limit case that an infinite budget is available. This shows that adapting low-fidelity models in MFMC beyond a certain approximation accuracy is unnecessary and can even be wasteful. Our AMFMC approach trades off adaptation and sampling and so avoids over-adaptation of the low-fidelity models. Besides the costs of adapting low-fidelity models, our AMFMC approach can also take into account the costs of the initial construction of the low-fidelity models (``offline costs''), which is critical if low-fidelity models are computationally expensive to build such as reduced models and data-fit surrogate models. Numerical results demonstrate that our adaptive approach can achieve orders of magnitude speedups compared to MFMC estimators with static low-fidelity models and compared to Monte Carlo estimators that use the high-fidelity model alone.
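A minimal two-model sketch of the static (non-adaptive) MFMC estimator is given below: the low-fidelity model acts as a control variate for the high-fidelity mean, with the coefficient estimated from the shared sample. The models, inputs and sample sizes are hypothetical, and the adaptive budget-splitting introduced in the talk is not shown.

import numpy as np

def mfmc_two_model(f_hi, f_lo, inputs_hi, inputs_lo):
    # Two-model multifidelity Monte Carlo estimate of E[f_hi]: few expensive
    # high-fidelity runs, many cheap low-fidelity runs (a superset of inputs),
    # with the low-fidelity model used as a control variate.
    y_hi = f_hi(inputs_hi)                  # n expensive evaluations
    y_lo_n = f_lo(inputs_hi)                # cheap model at the same n inputs
    y_lo_m = f_lo(inputs_lo)                # m >> n cheap evaluations
    alpha = np.cov(y_hi, y_lo_n)[0, 1] / y_lo_n.var(ddof=1)
    return y_hi.mean() + alpha * (y_lo_m.mean() - y_lo_n.mean())

# Hypothetical models: the low-fidelity model is a cheap biased surrogate.
rng = np.random.default_rng(1)
f_hi = lambda x: np.sin(x) + 0.1 * x ** 2
f_lo = lambda x: np.sin(x)
x_hi = rng.uniform(-2, 2, 50)
x_lo = np.concatenate([x_hi, rng.uniform(-2, 2, 5000)])
estimate = mfmc_two_model(f_hi, f_lo, x_hi, x_lo)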
UNQW03 9th March 2018
11:45 to 12:30
Anthony Nouy Principal component analysis for learning tree tensor networks
We present an extension of principal component analysis for functions of multiple random variables and an associated algorithm for the approximation of such functions using tree-based low-rank formats (tree tensor networks). A multivariate function is here considered as an element of a Hilbert tensor space of functions defined on a product set equipped with a probability measure. The algorithm only requires evaluations of functions on a structured set of points which is constructed adaptively. The algorithm constructs a hierarchy of subspaces associated with the different nodes of a dimension partition tree and a corresponding hierarchy of projection operators, based on interpolation or least-squares projection. Optimal subspaces are estimated using empirical principal component analysis of interpolations of partial random evaluations of the function. The algorithm is able to provide an approximation in any tree-based format with either a prescribed rank or a prescribed relative error, with a number of evaluations of the order of the storage complexity of the approximation format.
UNQ 16th March 2018
11:00 to 13:00
Andy Wiltshire Constraining carbon emissions pathways towards Paris climate targets
UNQ 28th March 2018
11:00 to 13:00
David Woods Bayesian optimal design for ordinary differential equation models with application in biological science
Bayesian optimal design is considered for experiments where the response distribution depends on the solution to a system of non-linear ordinary differential equations. The motivation is an experiment to estimate parameters in the equations governing the transport of amino acids through cell membranes in human placentas. Decision-theoretic Bayesian design of experiments for such nonlinear models is conceptually very attractive, allowing the formal incorporation of prior knowledge to overcome the parameter dependence of frequentist design and being less reliant on asymptotic approximations. However, the necessary approximation and maximization of the, typically analytically intractable, expected utility results in a computationally challenging problem. These issues are further exacerbated if the solution to the differential equations is not available in closed-form. This paper proposes a new combination of a probabilistic solution to the equations embedded within a Monte Carlo approximation to the expected utility with cyclic descent of a smooth approximation to find the optimal design. A novel precomputation algorithm reduces the computational burden, making the search for an optimal design feasible for bigger problems. The methods are demonstrated by finding new designs for a number of common models derived from differential equations, and by providing optimal designs for the placenta experiment.

Joint work with Antony Overstall and Ben Parker (University of Southampton)
UNQ 5th April 2018
11:00 to 12:00
Ines Cecilio UQ methodologies in Schlumberger's technology development for Drilling Automation
In this talk, I will explain how uncertainty quantification methodologies are playing a vital role at Schlumberger Cambridge Research for the modernization, increased safety and efficiency in drilling oil and gas wells. There are intrinsic uncertainties while drilling a well which carry risks with significant economic and HSE impact. I will present some solutions based on techniques such as sequential Monte Carlo, Bayesian networks and Gaussian processes which we developed for the prevention and mitigation of some risks as well as enabling automation.



UNQ 5th April 2018
16:00 to 17:00
Andrew Stuart Rothschild Lecture: The Legacy of Rudolph Kalman
In 1960 Rudolph Kalman published what is arguably the first paper to develop a systematic, principled approach to the use of data to improve the predictive capability of the mathematical models developed to understand the world around us. As our ability to gather data grows at an enormous rate, the importance of this work continues to grow too. The lecture will describe this paper and developments that have stemmed from it, revolutionizing fields such as spacecraft control, weather prediction, oceanography, oil recovery, medical imaging and artificial intelligence. Some mathematical details will also be provided, but limited to simple concepts such as optimization and iteration; the talk is designed to be broadly accessible to anyone with an interest in quantitative science.
UNQW04 9th April 2018
10:00 to 11:00
Ilya Mandel Studying black holes with gravitational waves: Why GW astronomy needs you!
Following the first direct observation of gravitational waves from a pair of merging black holes in September 2015, we are now entering the era of gravitational-wave astronomy, where gravitational waves are increasingly being used as a tool to explore topics ranging from astronomy (stellar and binary evolution) to fundamental physics (tests of the general theory of relativity). Future progress depends on addressing several key problems in statistical inference on gravitational-wave observations, including (i) rapidly growing computational cost for future instruments; (ii) noise characterisation; (iii) model systematics; and (iv) model selection.
UNQW04 9th April 2018
11:30 to 12:00
Paul Constantine Subspace-based dimension reduction for forward and inverse uncertainty quantification
Many methods in uncertainty quantification suffer from the curse of dimensionality. I will discuss several approaches for identifying exploitable low-dimensional structure---e.g., active subspaces or likelihood-informed subspaces---that enable otherwise infeasible forward and inverse uncertainty quantification.

UNQW04 9th April 2018
13:30 to 14:30
Christoph Schwab Deterministic Multilevel Methods for Forward and Inverse UQ in PDEs
We present the numerical analysis of Quasi Monte-Carlo methods for high-dimensional integration applied to forward and inverse uncertainty quantification for elliptic and parabolic PDEs. Emphasis will be placed on the role of parametric holomorphy of data-to-solution maps. We present corresponding results on deterministic quadratures in Bayesian Inversion of parametric PDEs, and the related bound on posterior sparsity and (dimension-independent) QMC convergence rates. Particular attention will be placed on Higher-Order QMC, and on the interplay between the structure of the representation system of the distributed uncertain input data (KL, splines, wavelets,...) and the structure of QMC weights. We also review stable and efficient generation of interlaced polynomial lattice rules, and the numerical analysis of multilevel QMC Finite Element PDE discretizations with applications to forward and inverse computational uncertainty quantification. QMC convergence rates will be compared with those afforded by Smolyak quadrature. Joint work with Robert Gantner and Lukas Herrmann and Jakob Zech (SAM, ETH) and Josef Dick, Thong LeGia and Frances Kuo (Sydney).

References:
[1] R. N. Gantner, L. Herrmann and Ch. Schwab, Quasi-Monte Carlo integration for affine-parametric, elliptic PDEs: local supports and product weights, SIAM J. Numer. Analysis, 56/1 (2018), pp. 111-135.
[2] J. Dick, R. N. Gantner, Q. T. Le Gia and Ch. Schwab, Multilevel higher-order quasi-Monte Carlo Bayesian estimation, Math. Mod. Meth. Appl. Sci., 27/5 (2017), pp. 953-995.
[3] R. N. Gantner and M. D. Peters, Higher Order Quasi-Monte Carlo for Bayesian Shape Inversion, accepted (2018) SINUM, SAM Report 2016-42.
[4] J. Dick, Q. T. Le Gia and Ch. Schwab, Higher order Quasi Monte Carlo integration for holomorphic, parametric operator equations, SIAM Journ. Uncertainty Quantification, 4/1 (2016), pp. 48-79.
[5] J. Dick, F.Y. Kuo, Q.T. LeGia and Ch. Schwab, Multi-level higher order QMC Galerkin discretization for affine parametric operator equations, SIAM J. Numer. Anal., 54/4 (2016), pp. 2541-2568.
UNQW04 9th April 2018
14:30 to 15:00
Richard Nickl Statistical guarantees for Bayesian uncertainty quantification in inverse problems
We discuss recent results in mathematical statistics that provide objective statistical guarantees for Bayesian algorithms in (possibly non-linear) noisy inverse problems. We focus in particular on the justification of Bayesian credible sets as proper frequentist confidence sets in the small noise limit via so-called `Bernstein - von Mises theorems', which provide Gaussian approximations to the posterior distribution, and introduce notions of such theorems in the infinite-dimensional settings relevant for inverse problems. We discuss in detail such a Bernstein-von Mises result for Bayesian inference on the unknown potential in the Schroedinger equation from an observation of the solution of that PDE corrupted by additive Gaussian white noise. See https://arxiv.org/abs/1707.01764 and also https://arxiv.org/abs/1708.06332
UNQW04 9th April 2018
15:00 to 15:30
Robert Scheichl Low-rank tensor approximation for sampling high dimensional distributions
High-dimensional distributions are notoriously difficult to sample from, particularly in the context of PDE-constrained inverse problems. In this talk, we will present general purpose samplers based on low-rank tensor surrogates in the tensor-train (TT) format, a methodology that has been exploited already for many years for scalable, high-dimensional function approximations in quantum chemistry. In the Bayesian context, the TT surrogate is built in a two stage process. First we build a surrogate of the entire PDE solution in the TT format, using a novel combination of alternating least squares and the TT cross algorithm. It exploits and preserves the block diagonal structure of the discretised operator in stochastic collocation schemes, requiring only independent PDE solutions at a few parameter values, thus allowing the use of existing high performance PDE solvers. In a second stage, we approximate the high-dimensional posterior density function also in TT format. Due to the particular structure of the TT surrogate, we can build an efficient conditional distribution method (or Rosenblatt transform) that only requires a sampling algorithm for one-dimensional conditionals. This conditional distribution method can also be used for other high-dimensional distributions, not necessarily coming from a PDE-constrained inverse problem. The overall computational cost and storage requirements of the sampler grow linearly with the dimension. For sufficiently smooth distributions, the ranks required for accurate TT approximations are moderate, leading to significant computational gains. We compare our new sampling method with established methods, such as the delayed rejection adaptive Metropolis (DRAM) algorithm, as well as with multilevel quasi-Monte Carlo ratio estimators. This is joint work with Sergey Dolgov (Bath), Colin Fox (Otago) and Karim Anaya-Izquierdo (Bath).
UNQW04 10th April 2018
09:00 to 10:00
Youssef Marzouk Optimal Bayesian experimental design: focused objectives and observation selection strategies
I will discuss two complementary efforts in Bayesian optimal experimental design for inverse problems. The first focuses on evaluating an experimental design objective: we describe a new computational approach for ``focused'' optimal Bayesian experimental design with nonlinear models, with the goal of maximizing expected information gain in targeted subsets of model parameters. Our approach considers uncertainty in the full set of model parameters, but employs a design objective that can exploit learning trade-offs among different parameter subsets. We introduce a layered multiple importance sampling scheme that provides consistent estimates of expected information gain in this focused setting, with significant reductions in estimator bias and variance for a given computational effort. The second effort focuses on optimization of information theoretic design objectives---in particular, from the combinatorial perspective of observation selection. Given many potential experiments, one may wish to choose a most informative subset thereof. Even if the data have in principle been collected, practical constraints on storage, communication, and computational costs may limit the number of observations that one wishes to employ. We introduce methods for selecting near-optimal subsets of the data under cardinality constraints. Our methods exploit the structure of linear inverse problems in the Bayesian setting, and can be efficiently implemented using low-rank approximations and greedy strategies based on modular bounds. This is joint work with Chi Feng and Jayanth Jagalur-Mohan.
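For orientation, the sketch below shows the plain nested Monte Carlo estimator of expected information gain for a toy linear-Gaussian model and a scalar design variable; the focused objectives, importance-sampling scheme and observation-selection strategies of the talk are not shown, and the model and sample sizes are illustrative.

import numpy as np

def nested_mc_eig(design, n_outer=2000, n_inner=500, sigma=0.5, rng=None):
    # Nested Monte Carlo estimate of the expected information gain of a design
    # for the toy model y = design * theta + N(0, sigma^2), theta ~ N(0, 1):
    # EIG = E_{theta, y}[ log p(y | theta) - log p(y) ].
    rng = np.random.default_rng() if rng is None else rng
    theta = rng.standard_normal(n_outer)
    y = design * theta + sigma * rng.standard_normal(n_outer)
    log_lik = -0.5 * ((y - design * theta) / sigma) ** 2      # up to a constant
    theta_in = rng.standard_normal((n_outer, n_inner))
    log_lik_in = -0.5 * ((y[:, None] - design * theta_in) / sigma) ** 2
    log_evidence = np.logaddexp.reduce(log_lik_in, axis=1) - np.log(n_inner)
    return np.mean(log_lik - log_evidence)      # the dropped constant cancels

# A larger |design| amplifies the signal, so the information gain should grow.
eig_small, eig_large = nested_mc_eig(0.5), nested_mc_eig(2.0)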
UNQW04 10th April 2018
10:00 to 10:30
Claudia Schillings On the Convergence of Laplace's Approximation and Its Implications for Bayesian Computation
Sampling methods for Bayesian inference show numerical instabilities in the case of concentrated posterior distributions. However, the concentration of the posterior is a highly desirable situation in practice, since it relates to informative or large data. In this talk, we will discuss convergence results of Laplace’s approximation and analyze the use of the approximation within sampling methods. This is joint work with Bjoern Sprungk (U Goettingen) and Philipp Wacker (FAU Erlangen).
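A minimal sketch of the Laplace approximation itself is given below: the posterior is approximated by a Gaussian centred at the MAP point, with covariance the inverse Hessian of the negative log-posterior (estimated here by finite differences). The toy posterior concentrates as the number of observations grows, which is the regime discussed in the talk.

import numpy as np
from scipy import optimize

def laplace_approximation(neg_log_post, x0, h=1e-4):
    # Gaussian approximation N(m, C): m is the MAP point (minimiser of the
    # negative log-posterior), C the inverse finite-difference Hessian at m.
    m = optimize.minimize(neg_log_post, x0, method="BFGS").x
    d = m.size
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei, ej = np.eye(d)[i] * h, np.eye(d)[j] * h
            H[i, j] = (neg_log_post(m + ei + ej) - neg_log_post(m + ei)
                       - neg_log_post(m + ej) + neg_log_post(m)) / h ** 2
    return m, np.linalg.inv(H)

# Toy 1-d posterior that concentrates as the number of observations n grows.
n = 200
data = np.random.default_rng(2).normal(1.0, 0.3, size=n)
neg_log_post = lambda th: 0.5 * th[0] ** 2 + 0.5 * np.sum((data - th[0]) ** 2) / 0.3 ** 2
m, C = laplace_approximation(neg_log_post, np.array([0.0]))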
UNQW04 10th April 2018
11:00 to 11:30
Hugo Maruri-Aguilar Smooth metamodels
Smooth supersaturated polynomials (Bates et al., 2014) have been proposed for emulating computer experiments. These models are simple to interpret and have spline-like properties inherited from the Sobolev-type of smoothing which is at the core of the method. An implementation of these models is available in the R package ssm. This talk aims to describe the method as well as discuss designs that could be performed with the help of smooth models. To illustrate the methodology, we use data from the fan blade assembly. This is joint work with H Wynn, R Bates and P Curtis.
UNQW04 10th April 2018
11:30 to 12:00
Michael Goldstein Inverting the Pareto Boundary: Bayes linear decision support with a soft constraint
We consider problems of decision support based around computer simulators, where we must take into account a soft constraint on our decision choices. This leads to the problem of identifying and inverting the Pareto boundary for the decision. We show how Bayes linear methods may be used for this purpose and how the sensitivity of the decision choices may be quantified and explored. The approach is illustrated with a problem on wind farm construction. This is joint work with Hailiang Du.
UNQW04 10th April 2018
13:30 to 14:30
Andrew Stuart Large Graph Limits of Learning Algorithms
Many problems in machine learning require the classification of high dimensional data. One methodology to approach such problems is to construct a graph whose vertices are identified with data points, with edges weighted according to some measure of affinity between the data points. Algorithms such as spectral clustering, probit classification and the Bayesian level set method can all be applied in this setting. The goal of the talk is to describe these algorithms for classification, and analyze them in the limit of large data sets. Doing so leads to interesting problems in the calculus of variations, Bayesian inverse problems and Markov chain Monte Carlo, all of which will be highlighted in the talk. These limiting problems give insight into the structure of the classification problem, and algorithms for it.
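As a small illustration of the graph construction underlying such algorithms (a sketch with illustrative weight and normalisation choices, not the specific large-data analysis of the talk), the code below builds a similarity graph from data points, forms the graph Laplacian and uses its low-frequency eigenvectors for classification.

import numpy as np

def spectral_embedding(X, k, eps=0.5):
    # Build a weighted similarity graph on the data points, form the
    # unnormalised graph Laplacian L = D - W, and return the k eigenvectors
    # with smallest eigenvalues as low-dimensional coordinates.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * eps ** 2))         # affinity weights between points
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(axis=1)) - W
    eigvals, eigvecs = np.linalg.eigh(L)
    return eigvecs[:, :k]

# Two well-separated point clouds: the second eigenvector (Fiedler vector)
# separates them by sign.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
labels = (spectral_embedding(X, k=2)[:, 1] > 0).astype(int)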

Collaboration with:  
Andrea Bertozzi (UCLA)
Michael Luo (UCLA)
Kostas Zygalakis (Edinburgh)
https://arxiv.org/abs/1703.08816  
and  
Matt Dunlop (Caltech)
Dejan Slepcev (CMU)
Matt Thorpe (Cambridge)
(forthcoming paper)
UNQW04 10th April 2018
15:00 to 16:00
Tim Sullivan Bayesian probabilistic numerical methods
In this work, numerical computation - such as numerical solution of a PDE - is treated as a statistical inverse problem in its own right. The popular Bayesian approach to inversion is considered, wherein a posterior distribution is induced over the object of interest by conditioning a prior distribution on the same finite information that would be used in a classical numerical method. The main technical consideration is that the data in this context are non-random and thus the standard Bayes' theorem does not hold. General conditions will be presented under which such Bayesian probabilistic numerical methods are well-posed, and a sequential Monte-Carlo method will be shown to provide consistent estimation of the posterior. The paradigm is extended to computational ``pipelines'', through which a distributional quantification of numerical error can be propagated. A sufficient condition is presented for when such propagation can be endowed with a globally coherent Bayesian interpretation, based on a novel class of probabilistic graphical models designed to represent a computational work-flow. The concepts are illustrated through explicit numerical experiments involving both linear and non-linear PDE models. This is joint work with Jon Cockayne, Chris Oates, and Mark Girolami. Further details are available in the preprint arXiv:1702.03673.
UNQW04 11th April 2018
09:00 to 10:00
Angela Dean Experimental Design for Prediction of Physical System Means Using Calibrated Computer Simulators
Computer experiments using deterministic simulators are often used to supplement physical system experiments. A common problem is that a computer simulator may provide biased output for the physical process due to the simplified physics or biology used in the mathematical model. However, when physical observations are available, it may be possible to use these data to align the simulator output to be close to the true mean response by constructing a bias-corrected predictor (a process called calibration). This talk looks at two aspects of experimental design for prediction of physical system means using a Bayesian calibrated predictor. First, the empirical prediction accuracy over the output space of several different types of combined physical and simulator designs is discussed. In particular, designs constructed using the integrated mean squared prediction error seem to perform well. Secondly, a sequential design methodology for optimizing a physical manufacturing process when there are multiple, competing product objectives is described. The goal is to identify a set of manufacturing conditions each of which leads to outputs on the Pareto Front of the product objectives, i.e. identify manufacturing conditions which cannot be modified to improve all the product objectives simultaneously. A sequential design methodology which maximizes the posterior expected minimax fitness function is used to add data from either the simulator or the manufacturing process. The method is illustrated with an example from an injection molding study. The presentation is based on joint work with Thomas Santner, Erin Leatherman, and Po-Hsu Allen Chen.
UNQW04 11th April 2018
10:00 to 10:30
Peter Challenor Experimental Design for Inverse Modelling: From Real to Virtual and Back
Inverse modelling requires both observations of the real world and runs of the computer model. As our inverse modelling method we look at history matching, which uses waves of computer model runs to find areas of input space where the model is implausible and can thus be ruled out. However, if we reduce the uncertainty on our observations we also rule out additional space. Given the relative costs of model runs and real-world observations, can we find a method for deciding which is best to do next? Using an example in radiology, we examine the interplay between taking real-world observations and running additional computer experiments, and explore some possible strategies.
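For reference, a single history matching wave reduces to evaluating an implausibility measure over candidate inputs; the sketch below uses the standard form combining emulator, observation and discrepancy variances with a 3-sigma cut-off, with all numbers invented.

```python
import numpy as np

def implausibility(em_mean, em_var, z, var_obs, var_disc):
    """I(x) = |z - E[f(x)]| / sqrt(Var_em(x) + Var_obs + Var_disc)."""
    return np.abs(z - em_mean) / np.sqrt(em_var + var_obs + var_disc)

# Hypothetical emulator output over a 1-D grid of candidate inputs.
x = np.linspace(0, 1, 101)
em_mean = 3 * x ** 2                        # emulator posterior mean
em_var = 0.05 * np.ones_like(x)             # emulator posterior variance
z, var_obs, var_disc = 1.2, 0.02, 0.03      # observation and its uncertainties

I = implausibility(em_mean, em_var, z, var_obs, var_disc)
not_ruled_out = x[I < 3.0]                  # the usual 3-sigma cut-off
print(f"{not_ruled_out.size} of {x.size} candidate inputs survive this wave")
```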
UNQW04 11th April 2018
11:00 to 11:30
David Ginsbourger Quantifying and reducing uncertainties on sets under Gaussian Process priors
Gaussian Process models have been used in a number of problems where an objective function f needs to be studied based on a drastically limited number of evaluations. Global optimization algorithms based on Gaussian Process models have been investigated for several decades, and have become quite popular notably in design of computer experiments. Further classes of problems involving the estimation of sets implicitly defined by f, e.g. sets of excursion above a given threshold, have also inspired multiple research developments. In this talk, we will give an overview of recent results and challenges pertaining to the estimation of sets under Gaussian Process priors, with particular interest in the quantification and sequential reduction of associated uncertainties. Based on a series of joint works primarily with Dario Azzimonti, François Bachoc, Julien Bect, Mickaël Binois, Clément Chevalier, Ilya Molchanov, Victor Picheny, Yann Richet and Emmanuel Vazquez.
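A small illustration of one of the simplest excursion-set quantities, assuming a Gaussian process posterior mean and standard deviation are already available on a grid (here they are synthetic): the pointwise coverage probability of the excursion set above a threshold and the implied expected excursion volume.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical GP posterior over a 1-D grid (in practice this would come
# from conditioning on simulator runs).
x = np.linspace(0, 1, 201)
post_mean = np.sin(3 * x)
post_sd = 0.15 + 0.1 * x

T = 0.5                                            # excursion threshold
coverage = norm.sf((T - post_mean) / post_sd)      # P(f(x) > T | data), pointwise

expected_volume = coverage.mean()                  # E[vol{x : f(x) > T}] on [0, 1]
plug_in_volume = np.mean(post_mean > T)
print(f"expected excursion volume {expected_volume:.3f} "
      f"(plug-in estimate {plug_in_volume:.3f})")
```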
UNQW04 11th April 2018
11:30 to 12:00
Daniel Williamson Parameter inference, model error and the goals of calibration
I have some data, a mathematical model describing the real-world process that produced that data, and I would like to learn something about the real world. We would typically formulate this as an inverse problem and apply our favourite techniques for solving it (e.g. Bayesian calibration or history matching), ultimately providing inference for those parameters in our mathematical model that are consistent with the data. Does this make sense? In this talk, I will use climate science as a lens through which to look at how mathematical models are viewed and treated by the scientific community, consider how UQ approaches to inverse problems might fit, and ask whether it matters if they don't.
UNQW04 12th April 2018
09:00 to 10:00
Jeremy Oakley Bayesian calibration, history matching and model discrepancy
Bayesian calibration and history matching are both well established tools for solving inverse problems: finding model inputs to make model outputs match observed data as closely as possible. I will discuss and compare both, within the context of decision-making. I will discuss the sometimes contentious issue of model discrepancy: how and whether we might account for an imperfect or misspecified model within the inference procedure. I will also present some work on history matching of a high dimensional individual based HIV transmission model (joint work with I. Andrianakis, N. McCreesh, I. Vernon, T. J. McKinley, R. N. Nsubuga, M. Goldstein and R. J. White).
UNQW04 12th April 2018
10:00 to 10:30
Masoumeh Dashti Modes of posterior measure for Bayesian inverse problems with a class of non-Gaussian priors
We consider the inverse problem of recovering an unknown functional parameter from noisy and indirect observations. We adopt a Bayesian approach and, for a non-smooth, non-Gaussian and sparsity-promoting class of prior measures, show that maximum a posteriori (MAP) estimates are characterized by the minimizers of a generalized Onsager-Machlup functional of the posterior. We also discuss some posterior consistency results. This is based on joint works with S. Agapiou, M. Burger and T. Helin.
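For orientation, the Gaussian-prior case that this work generalises characterises MAP estimates as minimisers of the Onsager-Machlup functional below (prior N(0, C) with Cameron-Martin space (E, ||.||_E) and negative log-likelihood Phi); the talk obtains an analogous variational characterisation for the non-Gaussian, sparsity-promoting priors considered.

```latex
% Reference Gaussian case: posterior d\mu^y/d\mu_0 \propto \exp(-\Phi(u; y)),
% prior \mu_0 = N(0, C) with Cameron--Martin space (E, \|\cdot\|_E).
I(u) \;=\; \Phi(u; y) \;+\; \tfrac{1}{2}\,\lVert u \rVert_{E}^{2},
\qquad u_{\mathrm{MAP}} \in \operatorname*{arg\,min}_{u \in E} I(u).
```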
UNQW04 12th April 2018
11:00 to 11:30
Nicholas Dexter Joint-sparse recovery for high-dimensional parametric PDEs
Co-authors: Hoang Tran (Oak Ridge National Laboratory) & Clayton Webster (University of Tennessee & Oak Ridge National Laboratory)

We present and analyze a novel sparse polynomial approximation method for the solution of PDEs with stochastic and parametric inputs. Our approach treats the parameterized problem as a problem of joint-sparse signal recovery, i.e., simultaneous reconstruction of a set of sparse signals, sharing a common sparsity pattern, from a countable, possibly infinite, set of measurements. In this setting, the support set of the signal is assumed to be unknown and the measurements may be corrupted by noise. We propose the solution of a linear inverse problem via convex sparse regularization for an approximation to the true signal. Our approach allows for global approximations of the solution over both physical and parametric domains. In addition, we show that the method enjoys the minimal sample complexity requirements common to compressed sensing-based approaches. We then perform extensive numerical experiments on several high-dimensional parameterized elliptic PDE models to demonstrate the recovery properties of the proposed approach.
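A small dense-matrix sketch of the convex joint-sparse recovery step only (the parametric-PDE sampling and analysis of the talk are not reproduced): l2,1-penalised least squares solved by proximal gradient descent with row-wise soft thresholding. Problem sizes, penalty weight and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
m, N, K = 60, 200, 5              # measurements, basis size, number of signals
A = rng.normal(size=(m, N)) / np.sqrt(m)

# Ground truth: K signals sharing the same 8-row support (joint sparsity).
support = rng.choice(N, size=8, replace=False)
X_true = np.zeros((N, K))
X_true[support] = rng.normal(size=(8, K))
Y = A @ X_true + 0.01 * rng.normal(size=(m, K))

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / ||A||_2^2

def row_shrink(X, tau):
    """Proximal operator of tau * sum_j ||X[j, :]||_2 (row-wise soft threshold)."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))

X = np.zeros((N, K))
for _ in range(500):                              # ISTA iterations
    grad = A.T @ (A @ X - Y)
    X = row_shrink(X - step * grad, lam * step)

recovered = np.where(np.linalg.norm(X, axis=1) > 1e-3)[0]
print("true support:     ", np.sort(support))
print("recovered support:", recovered)
```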
UNQW04 12th April 2018
11:30 to 12:00
Aretha Teckentrup Deep Gaussian Process Priors for Bayesian Inverse Problems
Co-authors: Matt Dunlop (Caltech), Mark Girolami (Imperial College), Andrew Stuart (Caltech)

Deep Gaussian processes have received a great deal of attention in the last couple of years, due to their ability to model very complex behaviour. In this talk, we present a general framework for constructing deep Gaussian processes, and provide a mathematical argument for why the depth of the processes is in most cases finite. We also present some numerical experiments, where deep Gaussian processes have been employed as prior distributions in Bayesian inverse problems.
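A toy illustration of the construction, under assumed kernels and lengthscales: a draw from a depth-three composed Gaussian process prior, obtained by feeding one layer's sample path into the next layer as inputs.

```python
import numpy as np

def rbf(a, b, ell):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

def sample_gp(inputs, ell, rng):
    """One sample path of a zero-mean GP with an RBF kernel, evaluated at `inputs`."""
    K = rbf(inputs, inputs, ell) + 1e-6 * np.eye(len(inputs))   # jitter for stability
    return np.linalg.cholesky(K) @ rng.normal(size=len(inputs))

rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 200)

# Depth-3 composition: each layer's sampled output becomes the next layer's input.
layer = x
for ell in (0.3, 0.2, 0.1):
    layer = sample_gp(layer, ell, rng)

print("deep GP prior draw: min %.2f, max %.2f" % (layer.min(), layer.max()))
```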

UNQW04 12th April 2018
13:30 to 14:30
Derek Bingham Bayesian model calibration for generalized linear models: An application in radiation transport
Co-author: Mike Grosskopf (Los Alamos National Lab)

Model calibration uses outputs from a simulator and field data to build a predictive model for the physical system and to estimate unknown inputs. The conventional approach to model calibration assumes that the observations are continuous outcomes; in many applications this is not the case. The methodology proposed here was motivated by an application in modelling photon counts at the Center for Exascale Radiation Transport, where high performance computing is used to simulate the flow of neutrons through various materials. In this talk, new Bayesian methodology for computer model calibration is presented that handles the count structure of the observed data, allowing closer fidelity to the experimental system and providing flexibility for identifying different forms of model discrepancy between the simulator and experiment.

UNQW04 12th April 2018
14:30 to 15:00
Ian Vernon Multilevel Emulation and History Matching of EAGLE: an expensive hydrodynamical Galaxy formation simulation.
We discuss strategies for performing Bayesian uncertainty analyses for extremely expensive simulators. The EAGLE model is one of the most (arguably the most) complex hydrodynamical galaxy formation simulations yet performed. It is, however, extremely expensive, currently taking approximately 5 million hours of CPU time, with order-of-magnitude increases in runtime planned. This makes a full uncertainty analysis, involving the exploration of multiple input parameters along with several additional uncertainty assessments, seemingly impossible. We present a strategy for the resolution of this problem, which incorporates four versions of the EAGLE model, of varying speed and accuracy, within a specific multilevel emulation framework that facilitates the incorporation of detailed judgements regarding the uncertain links between the physically different model versions. We show how this approach naturally fits within the iterative history matching process, whereby regions of input parameter space are identified that may lead to acceptable matches between model output and the real universe, given all major sources of uncertainty. We will briefly discuss the detailed assessment of such uncertainties as observation error and structural model discrepancies and their various components, and emphasise that without such assessments any such analysis rapidly loses meaning.
UNQW04 12th April 2018
15:30 to 16:00
Matthew Pratola A Comparison of Approximate Bayesian Computation and Stochastic Calibration for Spatio-Temporal Models of High-Frequency Rainfall Patterns
Modeling complex environmental phenomena such as rainfall patterns has proven challenging due to the difficulty in capturing heavy-tailed behavior, such as extreme weather, in a meaningful way. Recently, a novel approach to this task has taken the form of so-called stochastic weather generators, which use statistical formulations to emulate the distributional patterns of an environmental process. However, while sampling from such models is usually feasible, they typically do not possess closed-form likelihood functions, rendering the usual approaches to model fitting infeasible. Furthermore, some of these stochastic weather generators are now becoming so complex that even simulating from them can be computationally expensive. We propose and compare two approaches to fitting computationally expensive stochastic weather generators motivated by Approximate Bayesian Computation and Stochastic Simulator Calibration methodologies. The methods are then demonstrated by estimating important parameters of a recent stochastic weather generator model applied to rainfall data from the continental USA.
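As a minimal point of comparison for the first of these approaches, the sketch below runs plain ABC rejection on a toy heavy-tailed simulator (not the stochastic weather generator of the talk): draw parameters from the prior, simulate, and keep draws whose summary statistics fall within a tolerance of the observed ones. The simulator, summaries, prior ranges and tolerance are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulator(theta, n=200):
    """Toy intractable-likelihood simulator: heavy-tailed 'rainfall' amounts."""
    shape, scale = theta
    return rng.gamma(shape, scale, size=n)

def summaries(data):
    return np.array([np.mean(data), np.std(data), np.quantile(data, 0.95)])

# Synthetic "observed" record.
obs = simulator((2.0, 3.0))
s_obs = summaries(obs)

# ABC rejection: sample from the prior, keep draws with close summaries.
n_draws, tol = 20000, 0.5
prior = np.column_stack([rng.uniform(0.5, 5.0, n_draws),    # shape
                         rng.uniform(0.5, 6.0, n_draws)])   # scale
kept = []
for theta in prior:
    dist = np.linalg.norm((summaries(simulator(theta)) - s_obs) / s_obs)
    if dist < tol:
        kept.append(theta)
kept = np.array(kept)
if len(kept):
    print(f"accepted {len(kept)} draws; posterior mean estimate {kept.mean(axis=0)}")
```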
UNQW04 13th April 2018
09:00 to 10:00
Luc Pronzato Bayesian quadrature, energy minimization and kernel herding for space filling design
A standard objective in computer experiments is to predict the behaviour of an unknown function on a compact domain from a few evaluations inside the domain. When little is known about the function, space-filling design is advisable: typically, points of evaluation spread out across the available space are obtained by minimizing a geometrical (for instance, minimax-distance) or a discrepancy criterion measuring distance to uniformity. We shall make a survey of some recent results on energy functionals, and investigate connections between design for integration (quadrature design), construction of the (continuous) BLUE for the location model, and minimization of energy (kernel discrepancy) for signed measures. Integrally strictly positive definite kernels define strictly convex energy functionals, with an equivalence between the notions of potential and directional derivative for smooth kernels, showing the strong relation between discrepancy minimization and more traditional design of optimal experiments. In particular, kernel herding algorithms are special instances of vertex-direction methods used in optimal design, and can be applied to the construction of point sequences with suitable space-filling properties. The presentation is based on recent work with A.A. Zhigljavsky (Cardiff University).
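A toy kernel herding loop in this spirit, producing a space-filling sequence on the unit square: the kernel, bandwidth, candidate grid and the Monte Carlo approximation of the uniform target's kernel mean (potential) are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
ell = 0.15

def k(A, B):
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * ell ** 2))

# Candidate grid on [0, 1]^2 and a large reference sample approximating the
# kernel mean embedding (potential) of the uniform target measure.
g = np.linspace(0, 1, 30)
cand = np.array(np.meshgrid(g, g)).reshape(2, -1).T
ref = rng.uniform(size=(2000, 2))
potential = k(cand, ref).mean(axis=1)        # approx. \int k(x, t) dt

# Kernel herding: x_{n+1} = argmax_x [ potential(x) - (1/(n+1)) sum_i k(x, x_i) ].
design = []
for n in range(30):
    if design:
        repulsion = k(cand, np.array(design)).sum(axis=1) / (n + 1)
    else:
        repulsion = 0.0
    design.append(cand[np.argmax(potential - repulsion)])

design = np.array(design)
print("first five herding points:\n", design[:5])
```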
UNQW04 13th April 2018
10:00 to 10:30
Serge Guillas Computer model calibration with large nonstationary spatial outputs: application to the calibration of a climate model
Bayesian calibration of computer models tunes unknown input parameters by comparing outputs to observations. For model outputs distributed over space, this becomes computationally expensive because of the size of the output. To overcome this challenge, we employ a basis representation of the model outputs and observations: we match these decompositions to carry out the calibration efficiently. In a second step, we incorporate the nonstationary behaviour, in terms of spatial variations of both variance and correlations, into the calibration by inserting two INLA-SPDE parameters. A synthetic example and a climate model illustration highlight the benefits of our approach.
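The generic first step of such an approach might look like the following: build a low-dimensional basis for the spatial outputs from an ensemble of simulator runs via the SVD, and project both simulations and the observation field onto it. Ensemble size, grid size and truncation level are assumptions, and the nonstationary INLA-SPDE step of the talk is not shown.

```python
import numpy as np

rng = np.random.default_rng(7)
n_runs, n_grid = 40, 2500            # ensemble of simulator runs, spatial grid size

# Hypothetical ensemble of spatial outputs (rows = runs, columns = grid cells).
outputs = rng.normal(size=(n_runs, n_grid)).cumsum(axis=1)   # smooth-ish fields

mean_field = outputs.mean(axis=0)
anoms = outputs - mean_field

# Basis from the SVD of the centred ensemble; keep the leading r components.
U, s, Vt = np.linalg.svd(anoms, full_matrices=False)
r = 5
basis = Vt[:r]                        # r x n_grid

# Project simulations and a (synthetic) observation field onto the basis.
sim_coeffs = anoms @ basis.T          # n_runs x r
obs_field = outputs[0] + 0.1 * rng.normal(size=n_grid)
obs_coeffs = (obs_field - mean_field) @ basis.T

explained = (s[:r] ** 2).sum() / (s ** 2).sum()
print(f"{r} components explain {100 * explained:.1f}% of ensemble variance")
```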
UNQ 2nd May 2018
11:00 to 13:00
Matthew Plumlee Inexact computer model calibration: Concerns, controversy, credibility, and confidence
There has been a recent surge in statistical methods for calibration of inexact models in the most basic of settings.  Alongside these developments, a controversy has emerged about the goals of calibration of inexact models.  This talk will trace a swath of research stemming from about twenty years ago and potential concerns are marked along the way.  The talk will also present some new ideas in this setting that might help close some of these philosophical and practical issues.



UNQ 4th May 2018
11:00 to 13:00
Matt Dunlop Deep Gaussian Processes
UNQ 9th May 2018
11:00 to 13:00
Arbaz Khan Stochastic Galerkin mixed finite element approximation for parameter-dependent linear elasticity equations.
UNQ 10th May 2018
14:00 to 15:00
Albert Cohen Optimal Weighted Least Squares Methods for High Dimensional Approximation and Estimation.
UNQ 14th May 2018
11:00 to 13:00
Viet Ha Hoang Multilevel Markov Chain Monte Carlo finite element method for Bayesian inversion
UNQ 16th May 2018
11:00 to 13:00
John Paul Gosling Ensuring monotonicity in emulation
In this informal seminar, I will describe attempts to force emulators to have monotonic or convex outputs with respect to some of the input parameters. Various prior set-ups will be discussed, including piecewise linear approximations, truncated Gaussian processes and non-linear transformations of Gaussian processes, alongside computational methods such as Bayes linear updating and approximate Bayesian computation.



UNQ 18th May 2018
11:00 to 13:00
Hailiang Du Evaluating probabilistic forecasts - beyond proper skill scores
UNQ 21st May 2018
14:00 to 16:00
Paul Constantine Three of eleven topics on my mind: "Choose your own adventure"
UNQ 23rd May 2018
11:00 to 13:00
Francois-Xavier Briol Stein Points: Efficient sampling from posterior distributions by minimising Stein Discrepancies.
An important task in computational statistics and machine learning is to approximate a posterior distribution with an empirical measure supported on a set of representative points. This work focuses on methods where the selection of points is essentially deterministic, with an emphasis on achieving accurate approximation when the number of points is small. To this end, we present `Stein Points'. The idea is to exploit either a greedy or a conditional gradient method to iteratively minimise a kernel Stein discrepancy between the empirical measure and the target measure. Our empirical results demonstrate that Stein Points enable accurate approximation of the posterior at modest computational cost. In addition, theoretical results are provided to establish convergence of the method.
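A one-dimensional toy of the greedy variant for a standard normal target, using the Langevin Stein kernel built from a Gaussian base kernel; the paper also considers other base kernels and optimisers, and the candidate grid, lengthscale and number of points below are assumptions.

```python
import numpy as np

ell = 0.6
score = lambda x: -x                      # score function of the N(0, 1) target

def stein_kernel(x, y):
    """Langevin Stein kernel built from a 1-D Gaussian base kernel."""
    d = x[:, None] - y[None, :]
    k = np.exp(-d ** 2 / (2 * ell ** 2))
    return k * (1.0 / ell ** 2 - d ** 2 / ell ** 4
                + d * (score(x)[:, None] - score(y)[None, :]) / ell ** 2
                + score(x)[:, None] * score(y)[None, :])

# Greedy selection over a candidate grid: each new point minimises the
# increase in the squared kernel Stein discrepancy of the point set.
cand = np.linspace(-4.0, 4.0, 801)
k_diag = 1.0 / ell ** 2 + score(cand) ** 2       # k0(x, x) in closed form
points = []
for n in range(20):
    cross = (stein_kernel(np.array(points), cand).sum(axis=0)
             if points else np.zeros_like(cand))
    points.append(cand[np.argmin(0.5 * k_diag + cross)])

print("first Stein points:", np.round(points[:8], 3))
```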
UNQ 29th May 2018
14:00 to 15:00
Sebastian Ullmann Stochastic Galerkin reduced basis methods for parametrized random elliptic PDEs
UNQ 30th May 2018
11:00 to 13:00
Ronald DeVore Parameter estimation in parametric PDEs
UNQ 1st June 2018
11:00 to 13:00
Francois Bachoc Consistency of stepwise uncertainty reduction strategies for Gaussian processes
In the first part of the talk, we will introduce spatial Gaussian processes. Spatial Gaussian processes are widely studied from a statistical point of view, and have found applications in many fields, including geostatistics, climate science and computer experiments. Exact inference can be conducted for Gaussian processes, thanks to the Gaussian conditioning theorem. Furthermore, covariance parameters can be estimated, for instance by Maximum Likelihood. In the second part of the talk, we will introduce a class of iterative sampling strategies for Gaussian processes, called 'stepwise uncertainty reduction' (SUR). We will give examples of SUR strategies which are widely applied to computer experiments, for instance for optimization or detection of failure domains. We will provide a general consistency result for SUR strategies, together with applications to the most standard examples.




UNQ 5th June 2018
14:00 to 15:00
Joakim Beck Multilevel methods with importance sampling for Bayesian experimental design
UNQ 8th June 2018
11:00 to 13:00
Thomas Santner A Bayesian Composite Gaussian Process Model and its Application
This talk will describe a flexible Bayesian model that can be used to predict the output of a deterministic simulator code. The model assumes that the output can be described as the sum of a smooth global trend plus deviations from the global trend. The global trend and the local deviations are modeled as draws from independent GPs with separable correlation functions, subject to appropriate constraints to enforce smoothness of the global process compared with the local deviation process. The accuracy and limitations of predictions made using this model are demonstrated in a series of examples. The model is used to perform variable selection by identifying the most active inputs to the simulator. Inputs having "smaller" posterior distributions of the model's correlation parameters are judged to be more active. A reference inactive input is added to the data to judge the size of the correlation parameter for inactive inputs. Joint work with Casey Davis and Christopher Hans.
OFBW39 15th June 2018
10:00 to 10:10
Christie Marr, Jane Leeks Welcome and Introduction
OFBW39 15th June 2018
10:10 to 10:20
Peter Challenor Outline and Summary of INI Research Programme 'Uncertainty Quantification for Complex Systems: Theory and Methodologies'
OFBW39 15th June 2018
10:20 to 11:05
Max Gunzburger Surrogate Modelling
OFBW39 15th June 2018
11:05 to 11:35
James Finigan Examples of Uncertainty and Future Challenges in Defra's Environmental Models
OFBW39 15th June 2018
11:50 to 12:35
David Woods Design of Computational and Physical Experiments for Uncertainty Quantification
OFBW39 15th June 2018
12:35 to 13:05
Andrew Haslett Making Business Decisions under Uncertainty
OFBW39 15th June 2018
14:00 to 14:45
Aretha Teckentrup Uncertainty Quantification in Inverse Problems
OFBW39 15th June 2018
14:45 to 15:30
Richard Wilkinson Multilevel and Multi-Fidelity Methods
OFBW39 15th June 2018
15:30 to 16:00
Panel Discussion and Questions
UNQ 18th June 2018
11:00 to 13:00
Melina Freitag Balanced model order reduction for linear systems driven by Lévy noise
When solving linear stochastic differential equations numerically, usually a high-order spatial discretisation is used. Balanced truncation (BT) is a well-known projection technique in the deterministic framework which reduces the order of a control system and hence reduces computational complexity. We give an introduction to model order reduction (MOR) by BT and then consider a differential equation where the control is replaced by a noise term. We provide theoretical tools, such as stochastic concepts for reachability and observability, which are necessary for balancing-related MOR of linear stochastic differential equations with additive Lévy noise. Moreover, we derive error bounds for BT and provide numerical results for a specific example which support the theory. This is joint work with Martin Redmann (WIAS Berlin).
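For reference, the deterministic balanced truncation baseline that the talk extends to the stochastic setting can be sketched as follows: solve the two Lyapunov equations for the Gramians, balance with the square-root algorithm, and truncate. The random stable test system and reduced order below are assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

rng = np.random.default_rng(8)
n, m, p, r = 10, 2, 2, 4           # full order, inputs, outputs, reduced order

# A random stable system x' = A x + B u, y = C x (assumed test problem).
M = rng.normal(size=(n, n))
A = M - (np.linalg.norm(M, 2) + 1.0) * np.eye(n)   # shift guarantees stability
B = rng.normal(size=(n, m))
C = rng.normal(size=(p, n))

# Controllability and observability Gramians from the Lyapunov equations
# A P + P A^T + B B^T = 0 and A^T Q + Q A + C^T C = 0.
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Square-root balancing: factor the Gramians and take the SVD of R^T S.
S = cholesky(P, lower=True)
R = cholesky(Q, lower=True)
U, hsv, Vt = svd(R.T @ S)
print("Hankel singular values:", np.round(hsv, 5))

# Truncate to the r largest Hankel singular values.
T = S @ Vt[:r].T @ np.diag(hsv[:r] ** -0.5)
W = R @ U[:, :r] @ np.diag(hsv[:r] ** -0.5)
Ar, Br, Cr = W.T @ A @ T, W.T @ B, C @ T
print("reduced order:", Ar.shape[0])
```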
UNQ 19th June 2018
14:00 to 15:00
David Silvester UQ: does it require efficient linear algebra?
We discuss the key role that bespoke linear algebra plays in modelling PDEs with random coefficients using stochastic Galerkin approximation methods. As a specific example, we consider nearly incompressible linear elasticity problems with an uncertain spatially varying Young's modulus. The uncertainty is modelled with a finite set of parameters with prescribed probability distribution. We introduce a novel three-field mixed variational formulation of the PDE model and focus on the efficient solution of the associated high-dimensional indefinite linear system of equations. Eigenvalue bounds for the preconditioned system are established and shown to be independent of the discretisation parameters and the Poisson ratio. If time permits, we will also discuss the efficient solution of incompressible fluid flow problems with uncertain viscosity. This is joint work with Arbaz Khan and Catherine Powell.
UNQ 20th June 2018
10:30 to 13:00
Clémentine Prieur Goal-oriented error estimation for parameter-dependent nonlinear problems, application to sensitivity analysis
During this talk, we will present a numerically efficient method to bound the error made when approximating the output of a nonlinear problem depending on an unknown parameter (described by a probability distribution). The class of nonlinear problems under consideration includes high-dimensional nonlinear problems with a nonlinear output function. A goal-oriented probabilistic bound is computed in two phases: an offline phase dedicated to the computation of a reduced model, during which the full nonlinear problem needs to be solved only a small number of times, and an online phase which approximates the output. This approach is applied to a toy model and to a nonlinear partial differential equation, more precisely the Burgers equation with unknown initial condition given by two probabilistic parameters. The savings in computational cost are evaluated and presented.
UNQ 20th June 2018
14:05 to 14:55
Henry Wynn MSG Design of Experiments Seminar Series: The war against bias: experimental design for big data
The talk first reviews work (by others) on optimal experimental design for “big data”. This ranges from methods arising from the social and medical sciences, particularly in causal modelling, to recent work which tries to extract an optimal design from a loosely structured data set of covariates, and also the literature on optimal design to guard against bias. The authors draw on some of this work but take a more game-theoretic approach. The idea is that the causal modelling operation, run by a notional “Alice”, needs a shield protecting against bias built by a notional “Bob”. The two operations can act harmoniously when the joint operation is over a product space but, even when not, a Nash equilibrium may be achievable, which balances the two objectives.

Joint work with Elena Pesce and Eva Riccomagno.
UNQ 20th June 2018
14:55 to 15:45
Xun Huan MSG Design of Experiments Seminar Series: Simulation-based Bayesian experimental design for computationally intensive models
Selecting and performing experiments that produce the most useful data is extremely valuable in engineering and science applications where experiments are costly and resources are limited. Simulation-based experimental design thus provides a rigorous mathematical framework to systematically quantify and maximize the value of experiments while leveraging the existing knowledge and predictive capability of an available model.  We are particularly interested in design settings that accommodate nonlinear and computationally intensive models, such as those governed by ordinary and partial differential equations. Employing principles from Bayesian statistics to characterize and quantify uncertainty, we seek experiments that maximize the expected information gain. Computing these optimal designs using conventional approaches, however, is generally intractable. Major challenges include high dimensional parameter spaces, expensive model simulations, and numerical approximation and optimization of the expected information gain. We thus describe practical numerical methods to help overcome these obstacles, including global sensitivity analysis, surrogate modeling via polynomial chaos, and stochastic optimization.  The overall methodology is demonstrated through the design of combustion experiments for optimal learning of chemical rate parameters, and of configurations for a supersonic jet engine to obtain measurements most informative on turbulent flow parameters.
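A bare-bones nested Monte Carlo estimate of the expected information gain for a toy nonlinear model with Gaussian noise, comparing two candidate designs; the forward model, prior, noise level and sample sizes are assumptions, and none of the surrogate or optimisation machinery of the talk is shown.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(9)
sigma = 0.1                                    # assumed observation noise std

def model(theta, d):
    """Toy nonlinear forward model evaluated at design point d."""
    return np.exp(-d * theta) + 0.5 * np.sin(d * theta)

def log_lik(y, theta, d):
    return (-0.5 * ((y - model(theta, d)) / sigma) ** 2
            - np.log(sigma * np.sqrt(2 * np.pi)))

def expected_information_gain(d, n_outer=500, n_inner=500):
    """Nested Monte Carlo estimate of EIG(d) = E[log p(y|theta,d) - log p(y|d)]."""
    theta_out = rng.uniform(0.0, 2.0, n_outer)                   # prior draws
    y = model(theta_out, d) + sigma * rng.normal(size=n_outer)   # simulated data
    theta_in = rng.uniform(0.0, 2.0, n_inner)                    # inner prior draws
    log_evidence = logsumexp(log_lik(y[:, None], theta_in[None, :], d),
                             axis=1) - np.log(n_inner)
    return np.mean(log_lik(y, theta_out, d) - log_evidence)

for d in (0.5, 2.0):
    print(f"design d = {d}: estimated EIG = {expected_information_gain(d):.3f}")
```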
UNQ 20th June 2018
16:10 to 17:00
Serge Guillas MSG Design of Experiments Seminar Series: Mutual Information for Computer Experiments (MICE): design, optimization, and data assimilation: applications to tsunami hazard
We present a new method for the design of computer experiments. The sequential design algorithm MICE (Mutual Information for Computer Experiments) adaptively selects the input values at which to run the computer simulator, in order to maximize the expected information gain (mutual information) over the input space. The superior computational efficiency of MICE compared to other algorithms is demonstrated on test functions, and on the tsunami model VOLNA with overall gains of 20-50%. Moreover, there is a clear computational advantage in building a design of computer experiments solely on a subset of active variables. However, this prior selection inflates the limited computational budget. We thus interweave MICE with a screening algorithm to improve the overall efficiency of building an emulator. This approach allows us to assess future tsunami risk for complex earthquake sources over Cascadia. An application to optimization of expensive black-box functions using MICE is also introduced. It is then employed in a data assimilation scheme to design an optimal network of buoys near shore for the purpose of detecting incoming tsunamis.
UNQ 25th June 2018
11:00 to 13:00
Spyros Skoulakis Managing Model Risk in Banking