09:00 to 09:35 Registration
09:35 to 09:45 Welcome from Christie Marr (INI Deputy Director)
09:45 to 10:30 Wolfgang Dahmen (RWTH Aachen University) Parametric PDEs: Sparse Polynomial or Low-Rank Approximation? We discuss some recent results obtained jointly with Markus Bachmayr and Albert Cohen on the adaptive approximation of parametric solutions for a class of uniformly elliptic parametric PDEs. We first briefly review essential approximability properties of the solution manifold with respect to several approximation types (sparse polynomial expansions, low-rank approximations, hierarchical tensor formats), which then serve as benchmarks for numerical algorithms. We then discuss a fully adaptive algorithmic template, with respect to both spatial and parametric variables, which can be specialized to any of the above approximation types. It completely avoids the inversion of large linear systems and can be shown to achieve any given target accuracy with a certified bound without any a priori assumptions on the solutions. Moreover, the computational complexity (in terms of operation count) can be proven to be optimal (up to uniformly constant factors) for these benchmark classes. That is, it achieves a given target accuracy while keeping the number of adaptively generated degrees of freedom near-minimal, at linear computational cost. We discuss these findings from several perspectives, such as: which approximation type is best suited for which problem specification, the role of parametric expansion types, or intrusive versus non-intrusive schemes. INI 1
10:30 to 11:00 Morning Coffee
11:00 to 11:45 Jim Gattiker (Los Alamos National Laboratory) Complexity Challenges in Uncertainty Quantification for Scientific and Engineering Applications. Uncertainty Quantification (UQ) is established as an aspect of model-supported inference in scientific and engineering systems.
With this expectation comes the desire to address increasingly complex applications that challenge the ability of UQ to scale. This talk describes the view of UQ as a full-system modeling and analysis framework for scientific and engineering models, motivated by the example application of Carbon Capture technology development. Some of the challenging issues that have arisen in this multi-level, component-to-full-system modeling effort are discussed, as well as strategies for addressing them. Another challenge area is dealing with multivariate responses, and some implications and challenges in this area will also be discussed. INI 1
11:45 to 12:30 Leanna House (Virginia Polytechnic Institute and State University) Human-in-the-Loop Analytics: Two Approaches and Two Applications This will be a two-part talk that presents two applications of human-in-the-loop analytics. The first part takes a traditional approach to eliciting judgement from experts to specify subjective priors in the context of uncertainty quantification of simulators. In particular, the approach applies and verifies a method called Reification (Goldstein and Rougier, 2008), where experts initiate uncertainty analyses by specifying a hypothetical, high-fidelity computer model. With this hypothetical model, we can decompose potential discrepancy between a given simulator and reality. The second part of the talk places experts in the middle of analyses via Bayesian Visual Analytics (BaVA; House et al., 2015) so that experts may explore data and offer feedback continuously. For BaVA, we use reduced-dimensional visualizations within interactive software so that experts may communicate their judgements by interacting with data. Based on these interactions, we parameterize and specify "feedback distributions", rather than prior distributions, for analyses. We exemplify BaVA using a dataset about animals.
To conclude, I hope to engage in an open discussion of how we can use BaVA in Uncertainty Quantification of Computer Models. INI 1
12:30 to 13:30 Lunch @ Churchill College
14:00 to 14:45 Francisco Alejandro Diaz De la O (University of Liverpool) Reliability-based Sampling for Model Calibration History Matching is a calibration technique that systematically reduces the input space of a numerical model. At every iteration, an implausibility measure discards combinations of input values that are unlikely to provide a match between model output and experimental observations. As the input space reduces, sampling becomes increasingly challenging due to the small relative volume of the non-implausible space and the fact that it can exhibit a complex, disconnected geometry. Since realistic numerical models are computationally expensive, surrogate models and dimensionality reduction are commonly employed. In this talk we will explore how Subset Simulation, a Markov chain Monte Carlo technique from engineering reliability analysis, can solve the sampling problem in History Matching. We will also explore alternative implausibility measures that can guide the selection of regions of the non-implausible space in order to balance sampling exploration and exploitation. INI 1
14:45 to 15:30 Tan Bui-Thanh (University of Texas at Austin) A Triple Model Reduction for Data-Driven Large-Scale Inverse Problems in High Dimensional Parameter Spaces Co-authors: Ellen Le (The University of Texas at Austin), Aaron Myers (The University of Texas at Austin), Brad Marvin (The University of Texas at Austin), Vishwas Rao (Argonne National Laboratory) We present an approach to address the challenge of data-driven large-scale inverse problems in high dimensional parameter spaces. The idea is to combine a goal-oriented model reduction approach for the state, data-informed/active-subspace reduction for the parameters, and a randomized misfit approach for data reduction.
The method is designed to mitigate the bottlenecks of large-scale PDE solves, of high-dimensional parameter space exploration, and of the ever-increasing volume of data. Various theoretical and numerical results will be presented to support the proposed approach. INI 1
15:30 to 16:00 Afternoon Tea
16:00 to 16:45 Poster Blitz INI 1
17:00 to 18:00 Poster Session & Welcome Wine Reception at INI
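The implausibility cutoff at the heart of the History Matching approach described in Diaz De la O's talk can be sketched in a few lines. The following is a generic illustration, not the speaker's implementation: the toy emulator, the observation, and the conventional cutoff of 3 are all assumptions made here for demonstration.

```python
import numpy as np

# Generic sketch of one History Matching iteration (illustrative only):
# discard inputs whose implausibility exceeds a cutoff (commonly 3).
def implausibility(x, emulator_mean, emulator_var, z, var_obs):
    """I(x) = |z - E[f(x)]| / sqrt(Var[f(x)] + Var_obs)."""
    return np.abs(z - emulator_mean(x)) / np.sqrt(emulator_var(x) + var_obs)

# Toy "emulator" of a model f(x) = x^2, with observation z = 4.
mean = lambda x: x**2           # emulator predictive mean
var = lambda x: 0.1 + 0 * x     # emulator predictive variance
z, var_obs = 4.0, 0.1

x = np.linspace(-5, 5, 1001)    # candidate inputs
non_implausible = x[implausibility(x, mean, var, z, var_obs) < 3.0]
# The surviving (non-implausible) set clusters around the roots x = ±2;
# everything else is ruled out, shrinking the input space for the next wave.
print(non_implausible.min(), non_implausible.max())
```

In a real application the emulator would be refit on the surviving region at each wave, which is exactly where the sampling difficulty discussed in the abstract (small, possibly disconnected non-implausible volume) arises.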
09:00 to 09:45 Gianluigi Rozza (SISSA) Weighted reduced order methods for parametrized PDEs with random inputs In this talk we discuss a weighted approach for the reduction of parametrized partial differential equations with random inputs, focusing in particular on weighted approaches based on reduced basis (wRB) [Chen et al., SIAM Journal on Numerical Analysis (2013); D. Torlo et al., submitted (2017)] and proper orthogonal decomposition (wPOD) [L. Venturi et al., submitted (2017)]. We will first present the wPOD approach. A first topic of discussion is the choice of samples and respective weights according to a quadrature formula. Moreover, to reduce the computational effort in the offline stage of wPOD, we will employ Smolyak quadrature rules. We will then introduce the wRB method for advection-diffusion problems with dominant convection and random input parameters. The issue of the stabilisation of the resulting reduced order model will be discussed in detail. This work is in collaboration with Francesco Ballarin (SISSA, Trieste), Davide Torlo (University of Zurich), and Luca Venturi (Courant Institute of Mathematical Sciences, NYC). INI 1
09:45 to 10:30 Julia Brettschneider (University of Warwick) Model selection, model frames, and scientific interpretation Modelling complex systems in engineering, science or social science involves selection of measurements on many levels, including observability (determined e.g. by technical equipment, cost, confidentiality, existing records) and the need for interpretability. Among the initially selected variables, the frequency and quality of observation may be altered by censoring and sampling biases. A model is, by definition, a simplification, and the question one asks is often not whether a certain effect exists, but whether it matters. This crucially depends on the research objective or perspective.
Biased conclusions occur when the research question is interwoven with the mechanisms by which the variables for the analysis are selected or weighted. Such effects can occur in any application that involves observational data. I will give some examples from a few of my own research projects involving quality assessment, decision making, financial trading, genomics and microscopy. INI 1
10:30 to 11:00 Morning Coffee
11:00 to 11:45 Olga Mula (Université Paris-Dauphine) Greedy algorithms for optimal measurements selection in state estimation using reduced models Co-authors: Peter Binev (University of South Carolina), Albert Cohen (University Pierre et Marie Curie), James Nichols (University Pierre et Marie Curie) Parametric PDEs of the general form $$\mathcal{P}(u,a) = 0$$ are commonly used to describe many physical processes, where $\mathcal{P}$ is a differential operator, $a$ is a high-dimensional vector of parameters and $u$ is the unknown solution belonging to some Hilbert space $V$. A typical scenario in state estimation is the following: for an unknown parameter $a$, one observes $m$ independent linear measurements of $u(a)$ of the form $\ell_i(u) = (w_i, u)$, $i = 1, \dots, m$, where $\ell_i \in V'$ and the $w_i$ are the Riesz representers, and we write $W_m = \text{span}\{w_1,\dots,w_m\}$. The goal is to recover an approximation $u^*$ of $u$ from the measurements. Due to the dependence on $a$, the solutions of the PDE lie in a manifold, and the particular PDE structure often allows one to derive good approximations of it by linear spaces $V_n$ of moderate dimension $n$. In this setting, the observed measurements and $V_n$ can be combined to produce an approximation $u^*$ of $u$ up to accuracy $$\Vert u - u^* \Vert \leq \beta(V_n, W_m)^{-1} \, \text{dist}(u, V_n),$$ where $$\beta(V_n, W_m) := \inf_{v\in V_n} \frac{\Vert P_{W_m} v \Vert}{\Vert v \Vert}$$ plays the role of a stability constant.
For a given $V_n$, one relevant objective is to guarantee that $\beta(V_n, W_m) \geq \gamma > 0$ with a number of measurements $m \geq n$ as small as possible. We present results in this direction for the case where the measurement functionals $\ell_i$ belong to a complete dictionary. If time permits, we will also briefly explain ongoing research on how to adapt the reconstruction technique to noisy measurements. Related links: https://hal.archives-ouvertes.fr/hal-01638177/document (preprint) INI 1
11:45 to 12:30 Martin Stoll (Technische Universität Chemnitz) Low rank methods for PDE-constrained optimization Optimization subject to PDE constraints is crucial in many applications. Numerical analysis has contributed a great deal to the efficient solution of these problems, and our focus in this talk will be on the solution of the large-scale linear systems that represent the first-order optimality conditions. We illustrate that these systems, while being of very large dimension, usually contain a lot of mathematical structure. In particular, we focus on low-rank methods that utilize the Kronecker product structure of the system matrices. These methods allow the solution of a time-dependent problem with the storage requirements of a small multiple of the steady problem. Furthermore, this technique can be used to tackle the added dimensionality when we consider optimization problems subject to PDEs with uncertain coefficients. The stochastic Galerkin FEM technique leads to a system of vast dimension that would be infeasible to solve by standard methods, but using low-rank techniques it can be solved on a standard laptop computer.
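In a finite-dimensional setting, the stability constant $\beta(V_n, W_m)$ from Mula's abstract reduces to a smallest singular value and can be computed in a few lines. This is a minimal numerical sketch under assumptions made here (Euclidean ambient space $\mathbb{R}^d$ standing in for $V$, orthonormal bases for both spaces), not code from the talk.

```python
import numpy as np

def beta(V, W):
    """Stability constant beta(V_n, W_m) = inf_{v in V_n} ||P_{W_m} v|| / ||v||.

    V: (d, n) orthonormal basis of V_n; W: (d, m) orthonormal basis of W_m.
    For v = V c we have ||v|| = ||c|| and ||P_{W_m} v|| = ||W^T V c||,
    so beta is the smallest singular value of W^T V.
    """
    return np.linalg.svd(W.T @ V, compute_uv=False).min()

rng = np.random.default_rng(0)
d, n, m = 50, 5, 12
V = np.linalg.qr(rng.standard_normal((d, n)))[0]

# If W_m contains V_n, every v in V_n is perfectly observed: beta = 1.
W_full = np.linalg.qr(np.hstack([V, rng.standard_normal((d, m - n))]))[0]
print(beta(V, W_full))                 # 1.0 up to rounding

# A generic W_m only partially captures V_n: 0 < beta < 1.
W_rand = np.linalg.qr(rng.standard_normal((d, m)))[0]
print(0.0 < beta(V, W_rand) < 1.0)     # True
```

The greedy measurement-selection problem in the talk then amounts to choosing the $w_i$ from a dictionary so that this smallest singular value stays above a target $\gamma$ with as few measurements as possible.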
INI 1
12:30 to 13:30 Lunch @ Churchill College
14:00 to 14:45 Boris Kramer (Massachusetts Institute of Technology) Conditional-Value-at-Risk Estimation with Reduced-Order Models We present two reduced-order-model-based approaches for the efficient and accurate evaluation of the Conditional Value-at-Risk (CVaR) of quantities of interest (QoI) in engineering systems with uncertain parameters. CVaR is used to model objective or constraint functions in risk-averse engineering design and optimization applications under uncertainty. Estimating the CVaR of the QoI is expensive: while the distribution of the uncertain system parameters is known, the resulting QoI is a random variable that is implicitly determined via the state of the system. Evaluating the CVaR of the QoI requires sampling in the tail of the QoI distribution, and typically requires many solutions of an expensive full-order model of the engineering system. Our reduced-order model approaches substantially reduce this computational expense. INI 1
14:45 to 15:30 Olivier Zahm (Massachusetts Institute of Technology) Certified dimension reduction of the input parameter space of vector-valued functions Co-authors: Paul Constantine (University of Colorado), Clémentine Prieur (Université Joseph Fourier), Youssef Marzouk (MIT) Approximation of multivariate functions is a difficult task when the number of input parameters is large. Identifying the directions in which the function does not significantly vary is a key preprocessing step to reduce the complexity of the approximation algorithms. Among other dimensionality reduction tools, the active subspace is defined by means of the gradient of a scalar-valued function. It can be interpreted as the subspace of the parameter space in which the function, on average, varies the most. In this talk, we propose a natural extension of the active subspace for vector-valued functions, e.g. functions with multiple scalar-valued outputs or functions taking values in function spaces.
Our methodology consists of minimizing an upper bound of the approximation error, obtained using Poincaré-type inequalities. We also compare the proposed gradient-based approach with the popular and widely used truncated Karhunen-Loève decomposition (KL). We show that, from a theoretical perspective, the truncated KL can be interpreted as a method which minimizes a looser upper bound of the error than the one we derive. Also, numerical comparisons show that better dimension reduction can be obtained provided gradients of the function are available. INI 1
15:30 to 16:00 Afternoon Tea
16:00 to 16:45 James Salter (University of Exeter) Quantifying spatio-temporal boundary condition uncertainty for the deglaciation Ice sheet models are currently unable to reproduce the retreat of the North American ice sheet through the last deglaciation, due to the large uncertainty in the boundary conditions. To successfully calibrate such a model, it is important to vary both the input parameters and the boundary conditions. These boundary conditions are derived from global climate model simulations, and hence the biases in the output of these models are carried through to the ice sheet output, restricting the range of ice sheet output that is possible. Due to the expense of running global climate models for the required 21,000 years, only a small number of such runs are available; hence it is difficult to quantify the boundary condition uncertainty. We develop a methodology for generating a range of plausible boundary conditions, using a low-dimensional basis representation for the required spatio-temporal input. We derive this basis by combining key patterns, extracted from a small ensemble of climate model runs through the deglaciation, with sparse spatio-temporal observations. Varying the coefficients of the chosen basis vectors and the ice sheet parameters simultaneously, we run ensembles of the ice sheet model.
By emulating the ice sheet output, we history match iteratively and rule out combinations of the ice sheet parameters and boundary condition coefficients that lead to implausible deglaciations, reducing the uncertainty due to the boundary conditions. INI 1
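The scalar-valued active-subspace construction that Zahm's talk extends can be sketched in a few lines. This is a generic illustration of the standard gradient-based recipe, not code from the talk; the toy function and sample sizes are assumptions made here. One estimates $C = \mathbb{E}[\nabla f \, \nabla f^T]$ by Monte Carlo and keeps the leading eigenvectors:

```python
import numpy as np

# Generic active-subspace sketch (scalar-valued f), illustrative only.
# Estimate C = E[grad f grad f^T] by Monte Carlo; its leading
# eigenvectors span the directions along which f varies most on average.
def active_subspace(grad_f, sample, n_samples, k):
    G = np.stack([grad_f(sample()) for _ in range(n_samples)])  # (N, d)
    C = G.T @ G / n_samples                                     # (d, d)
    eigvals, eigvecs = np.linalg.eigh(C)                        # ascending order
    return eigvals[::-1], eigvecs[:, ::-1][:, :k]               # descending, top k

# Toy function on R^5 that only varies along x0 + x1:
# f(x) = sin(x0 + x1), so grad f is parallel to (1, 1, 0, 0, 0).
rng = np.random.default_rng(1)
d = 5
grad_f = lambda x: np.cos(x[0] + x[1]) * np.array([1.0, 1, 0, 0, 0])
eigvals, U = active_subspace(grad_f, lambda: rng.standard_normal(d), 500, 1)

# The one-dimensional active subspace recovers the direction (1, 1, 0, 0, 0).
print(np.round(np.abs(U[:, 0]) * np.sqrt(2), 6))   # ~ [1. 1. 0. 0. 0.]
```

The vector-valued extension in the talk replaces the outer product of scalar gradients with the analogous construction for Jacobians, with the error control coming from Poincaré-type inequalities as described in the abstract.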
09:00 to 09:45 Peter Binev (University of South Carolina) State Estimation in Reduced Modeling Co-authors: Albert Cohen (University Paris 6), Wolfgang Dahmen (University of South Carolina), Ronald DeVore (Texas A&M University), Guergana Petrova (Texas A&M University), Przemyslaw Wojtaszczyk (University of Warsaw) We consider the problem of optimal recovery of an element $u$ of a Hilbert space $H$ from measurements of the form $\ell_j(u)$, $j = 1, \dots, m$, where the $\ell_j$ are known linear functionals on $H$. Motivated by reduced modeling for solving parametric partial differential equations, we investigate a setting where the additional information about the solution $u$ is in the form of how well $u$ can be approximated by a certain known subspace $V_n$ of $H$ of dimension $n$, or more generally, in the form of how well $u$ can be approximated by each of a sequence of nested subspaces $V_0, V_1, \dots, V_n$ with each $V_k$ of dimension $k$. The goal is to exploit the additional information derived from the whole hierarchy of spaces rather than only from the largest space $V_n$. It is shown that, in this multi-space case, the set of all $u$ that satisfy the given information can be described as the intersection of a family of known ellipsoidal cylinders in $H$, and that a near-optimal recovery algorithm for the multi-space problem is provided by identifying any point in this intersection. INI 1
09:45 to 10:30 Catherine Powell (University of Manchester) Reduced Basis Solvers for Stochastic Galerkin Matrix Equations In the applied mathematics community, reduced basis methods are typically used to reduce the computational cost of applying sampling methods to parameter-dependent partial differential equations (PDEs). For PDE models in particular, repeatedly running computer models (e.g. finite element solvers) for many choices of the input parameters is computationally infeasible.
The cost of obtaining each sample of the numerical solution is instead reduced by projecting the so-called high-fidelity problem into a reduced (lower-dimensional) space. The choice of reduced space is crucial in balancing cost and overall accuracy. In this talk, we do not consider sampling methods. Rather, we consider stochastic Galerkin finite element methods (SGFEMs) for parameter-dependent PDEs. Here, the idea is to approximate the solution to the PDE model as a function of the input parameters. We combine finite element approximation in physical space with global polynomial approximation on the parameter domain. In the statistics community, the term intrusive polynomial chaos approximation is often used. Unlike sampling methods, which require the solution of many deterministic problems, SGFEMs yield a single very large linear system of equations with coefficient matrices that have a characteristic Kronecker product structure. By reformulating the systems as multiterm linear matrix equations, we have developed [see: C.E. Powell, D. Silvester, V. Simoncini, An efficient reduced basis solver for stochastic Galerkin matrix equations, SIAM J. Sci. Comput. 39(1), pp. A141-A163 (2017)] a memory-efficient solution algorithm which generalizes ideas from rational Krylov subspace approximation (known in the linear algebra community). The new approach determines a low-rank approximation to the solution matrix by performing a projection onto a reduced space that is iteratively augmented with problem-specific basis vectors. Crucially, it requires far less memory than standard iterative methods applied to the Kronecker formulation of the linear systems. For test problems consisting of elliptic PDEs, and indefinite problems with saddle point structure, we are able to solve systems with billions of equations quickly and efficiently on a standard desktop computer.
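The Kronecker structure exploited in the talks of Stoll and Powell rests on the identity $(G \otimes A)\,\mathrm{vec}(X) = \mathrm{vec}(A X G^T)$, which lets one work with the solution as a matrix instead of a huge vector. A small numerical sketch of this identity (illustrative only; the toy sizes are assumptions, and this is not the reduced basis solver from the cited paper):

```python
import numpy as np

# The identity (G ⊗ A) vec(X) = vec(A X G^T) lets a stochastic Galerkin
# system sum_k (G_k ⊗ A_k) vec(X) = vec(F) be rewritten as the multiterm
# matrix equation sum_k A_k X G_k^T = F (illustrative sketch only).
rng = np.random.default_rng(2)
n, p = 6, 4                        # spatial / parametric dimensions (toy sizes)
A = rng.standard_normal((n, n))
G = rng.standard_normal((p, p))
X = rng.standard_normal((n, p))

vec = lambda M: M.reshape(-1, order="F")   # column-major vectorisation
lhs = np.kron(G, A) @ vec(X)
rhs = vec(A @ X @ G.T)
print(np.allclose(lhs, rhs))               # True
```

The memory argument is visible even at toy sizes: the Kronecker matrix has $(np)^2$ entries, while the matrix-equation form only ever touches the $n \times n$ and $p \times p$ factors and a (possibly low-rank) $n \times p$ solution matrix, which is what makes billion-dimensional systems tractable on a desktop.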
INI 1
10:30 to 11:00 Morning Coffee
11:00 to 11:45 Benjamin Peherstorfer (University of Wisconsin-Madison) Multifidelity Monte Carlo estimation with adaptive low-fidelity models Multifidelity Monte Carlo (MFMC) estimation combines low- and high-fidelity models to speed up the estimation of statistics of the high-fidelity model outputs. MFMC optimally samples the low- and high-fidelity models such that the MFMC estimator has minimal mean-squared error for a given computational budget. In the standard setup of MFMC, the low-fidelity models are static, i.e., they are given and fixed and cannot be changed or adapted. We introduce the adaptive MFMC (AMFMC) method, which splits the computational budget between adapting the low-fidelity models to improve their approximation quality and sampling the low- and high-fidelity models to reduce the mean-squared error of the estimator. Our AMFMC approach derives the quasi-optimal balance between adaptation and sampling in the sense that it minimizes an upper bound of the mean-squared error, instead of the error directly. We show that the quasi-optimal number of adaptations of the low-fidelity models is bounded even in the limit case that an infinite budget is available. This shows that adapting low-fidelity models in MFMC beyond a certain approximation accuracy is unnecessary and can even be wasteful. Our AMFMC approach trades off adaptation and sampling and so avoids over-adaptation of the low-fidelity models. Besides the costs of adapting low-fidelity models, our AMFMC approach can also take into account the costs of the initial construction of the low-fidelity models ("offline costs"), which is critical if low-fidelity models are computationally expensive to build, such as reduced models and data-fit surrogate models.
Numerical results demonstrate that our adaptive approach can achieve orders-of-magnitude speedups compared to MFMC estimators with static low-fidelity models and compared to Monte Carlo estimators that use the high-fidelity model alone. INI 1
11:45 to 12:30 Anthony Nouy (Université de Nantes) Principal component analysis for learning tree tensor networks We present an extension of principal component analysis for functions of multiple random variables, and an associated algorithm for the approximation of such functions using tree-based low-rank formats (tree tensor networks). A multivariate function is here considered as an element of a Hilbert tensor space of functions defined on a product set equipped with a probability measure. The algorithm only requires evaluations of the function on a structured set of points which is constructed adaptively. The algorithm constructs a hierarchy of subspaces associated with the different nodes of a dimension partition tree, and a corresponding hierarchy of projection operators based on interpolation or least-squares projection. Optimal subspaces are estimated using empirical principal component analysis of interpolations of partial random evaluations of the function. The algorithm is able to provide an approximation in any tree-based format with either a prescribed rank or a prescribed relative error, with a number of evaluations of the order of the storage complexity of the approximation format. INI 1
12:30 to 13:30 Lunch @ Churchill College
14:00 to 14:45 Discussion INI 1
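The basic two-model MFMC estimator behind Peherstorfer's talk can be sketched as a control-variate correction: a small number of expensive high-fidelity samples is corrected using many more cheap low-fidelity samples. This is a generic textbook-style illustration under assumptions made here (toy models, a fixed control-variate coefficient), not the AMFMC method itself.

```python
import numpy as np

# Two-model multifidelity Monte Carlo (MFMC) sketch, illustrative only:
# correct a small high-fidelity sample mean with a cheap, correlated
# low-fidelity control variate evaluated on many more samples.
def mfmc_mean(f_hi, f_lo, draw, n_hi, n_lo, alpha, rng):
    z = np.array([draw(rng) for _ in range(n_lo)])
    y_lo = f_lo(z)                  # cheap model on all n_lo samples
    y_hi = f_hi(z[:n_hi])           # expensive model on the first n_hi only
    # MFMC estimator: hi-fi mean plus alpha times the lo-fi mean correction
    return y_hi.mean() + alpha * (y_lo.mean() - y_lo[:n_hi].mean())

# Toy models: f_hi(z) = z^2, and a biased but correlated cheap surrogate.
f_hi = lambda z: z**2
f_lo = lambda z: z**2 + 0.1 * np.sin(5 * z)
rng = np.random.default_rng(3)
est = mfmc_mean(f_hi, f_lo, lambda r: r.standard_normal(), 100, 10000, 1.0, rng)
print(est)   # close to E[z^2] = 1 for z ~ N(0, 1)
```

The adaptive variant in the talk additionally decides how much of the budget to spend improving `f_lo` itself (its approximation quality) versus drawing more samples, which is where the quasi-optimal adaptation/sampling trade-off enters.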