Timetable (DAEW05)

Accelerating Industrial Productivity via Deterministic Computer Experiments & Stochastic Simulation

Monday 5th September 2011 to Friday 9th September 2011

Monday 5th September 2011
09:00 to 09:55 Registration
09:55 to 10:00 D Wallace
Welcome from INI Director, Professor Sir David Wallace
INI 1
10:00 to 10:20 T Santner (Ohio State University)
Conference Organisation and Goals
INI 1
10:20 to 10:45 L Schruben (University of California, Berkeley)
Saving Millions (lives and money) with Simulation Experiments
There are fundamental differences between real-world and computer simulation experiments. In simulations, an experimenter has full control over everything: time, uncertainty, causality, structure, environment, etc. This talk demonstrates how synergies in simulation experiments, models, codes, and analysis sometimes make these indistinguishable. This motivates us to revise two famous quotes about statistical modeling and experimental design in the simulation context. George E. P. Box, one of the most prolific statisticians in recent history, is attributed with the quote "All models are wrong; some are useful". A simulation model and experiment that is designed for a specific analytical purpose may be both more wrong and more useful. Sir R. A. Fisher, credited as the creator of statistical experimental design, is often quoted as saying, "The best time to design an experiment is after you've run it." The best time to design a simulation model and experiment is often while you're running it. Examples are presented where analysis-specific simulations and simulation-specific experiments have greatly improved the production and distribution of life-saving biopharmaceuticals. However, the methods apply more generally to production and service systems.
INI 1
10:45 to 11:00 Discussion INI 1
11:00 to 11:30 Coffee / Tea
11:30 to 11:55 D Higdon
Emulating the Nonlinear Matter Power Spectrum for the Universe
INI 1
11:55 to 12:10 Discussion INI 1
12:10 to 12:30 Poster Preview (All Poster Presenters) INI 1
12:30 to 13:30 Lunch at Wolfson Court
14:00 to 14:30 G Reinman
Design for Variation
INI 1
14:30 to 15:00 A Forrester (University of Southampton)
Design and analysis of variable fidelity multi-objective experiments
At the core of any design process is the need to predict performance and vary designs accordingly. Performance prediction may come in many forms, from back-of-envelope calculations through high fidelity simulations to physical testing. Such experiments may be one- or two-dimensional simplifications and may include all or some environmental factors. Traditional practice is to increase the fidelity and expense of the experiments as the design progresses, superseding previous low-fidelity results. However, by employing a surrogate modelling approach, all results can contribute to the design process. This talk presents the use of nested space filling experimental designs and a co-Kriging based multi-objective expected improvement criterion to select Pareto optimal solutions. The method is applied to the design of an unmanned air vehicle wing and the rear wing of a race-car.
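The nested designs mentioned above pair every expensive run with a cheap run at the same inputs. As an informal sketch only (not the authors' algorithm), one can build a space-filling design for the low-fidelity code and then pick a well-spread subset of it for the high-fidelity code; the Python below assumes a random Latin hypercube plus a greedy maximin subset.

    import numpy as np

    rng = np.random.default_rng(0)

    def latin_hypercube(n, d):
        # One point per equal-width bin in each dimension, jittered within its bin.
        perms = np.array([rng.permutation(n) for _ in range(d)]).T
        return (perms + rng.random((n, d))) / n

    def maximin_subset(X, k):
        # Greedy maximin: keep adding the candidate furthest from the points already chosen.
        chosen = [0]
        while len(chosen) < k:
            dist = np.min(np.linalg.norm(X[:, None] - X[chosen], axis=2), axis=1)
            dist[chosen] = -1.0
            chosen.append(int(np.argmax(dist)))
        return np.array(chosen)

    X_cheap = latin_hypercube(40, 2)                      # design for the low-fidelity runs
    X_expensive = X_cheap[maximin_subset(X_cheap, 10)]    # nested high-fidelity subset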
INI 1
15:00 to 15:30 Coffee / Tea
15:30 to 16:00 P Challenor (National Oceanography Centre, Southampton)
The Design of Validation Experiments
When we build an emulator to analyse a computer experiment, the normal practice is to use a two-stage approach to design. An initial space filling design is used to build the emulator. A second experiment is then carried out to validate the emulator. In this paper I will consider what form this validation experiment should take. Current practice is to use another space filling design, unrelated to the first. Clearly we want our validation design to be space filling: we want to validate the emulator everywhere. But we might have other criteria as well for validation. For instance, we might want to make sure that we validate our estimate of the spatial scales, in which case we want our validation design to include points at varying distances from our original design.
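A minimal sketch of the last point, assuming only that one wants validation points spanning a range of distances from the original design (an illustration of the idea, not a specific published criterion):

    import numpy as np

    rng = np.random.default_rng(1)

    X = rng.random((20, 2))               # stand-in for the original space-filling design
    candidates = rng.random((2000, 2))    # pool of possible validation points
    d = np.min(np.linalg.norm(candidates[:, None] - X, axis=2), axis=1)

    # Bin candidates by distance to the design and keep one per bin, so the validation
    # design probes short, medium and long ranges (useful for checking spatial scales).
    edges = np.quantile(d, np.linspace(0, 1, 11))
    validation = np.array([candidates[(d >= lo) & (d <= hi)][0]
                           for lo, hi in zip(edges[:-1], edges[1:])])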
INI 1
16:00 to 16:30 S Sain (National Center for Atmospheric Research)
Statistical calibration of complex computer models: A case study with the Lyon-Fedder-Mobarry model of the magnetosphere
The magnetosphere is the region of the Earth's magnetic field that forms a protective bubble which impedes the transfer of energy and momentum from solar wind plasma. The Lyon-Fedder-Mobarry model is used for coupled magnetosphere-ionosphere simulation and to model the effect of electron storms on the upper atmosphere. This model is generally run in a high-performance computing environment and the output represents a bivariate spatial-temporal field. In this work, we outline an approach for calibrating this computer model and quantifying the uncertainty in the calibration parameters that combines the computationally expensive but sparser high-resolution output with the lower fidelity but computationally inexpensive low-resolution output.
INI 1
16:30 to 17:00 J Rougier (University of Bristol)
Emulating complex codes: The implications of using separable covariance functions
Emulators are crucial in experiments where the computer code is sufficiently expensive that the ensemble of runs cannot span the parameter space. In this case they allow the ensemble to be augmented with additional judgements concerning smoothness and monotonicity. The emulator can then replace the code in inferential calculations, but in my experience a more important role for emulators is in trapping code errors.

The theory of emulation is based around the construction of a stochastic process prior, which is then updated by conditioning on the runs in the ensemble. Almost invariably, this prior contains a component with a separable covariance function. This talk considers exactly what this separability implies for the nature of the underlying function. The strong conclusion is that processes with separable covariance functions are second-order equivalent to the product of second-order uncorrelated processes.

This is an alarmingly strong prior judgement about the computer code, ruling out interactions. But, like the property of stationarity, it does not survive the conditioning process. The cautionary response is to include several regression terms in the emulator prior.
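For reference, the separability property under discussion can be written schematically for a two-input case (notation mine, not the speaker's):

    k\bigl((x_1, x_2), (x_1', x_2')\bigr) = k_1(x_1, x_1')\, k_2(x_2, x_2')

so that, in second-order terms, the prior behaves like a product $f(x_1, x_2) = Z_1(x_1)\, Z_2(x_2)$ of mutually uncorrelated processes $Z_1$ and $Z_2$ with covariances $k_1$ and $k_2$.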
INI 1
17:15 to 18:15 J Stufken (University of Georgia)
A Short Overview of Orthogonal Arrays
Combinatorial arrangements now known as orthogonal arrays were introduced for use in statistics in the 1940s. The primary purpose for their introduction was to guide the selection of level combinations in a fractional factorial experiment, and this is still an important reason for their interest in statistics. Various criteria based on statistical properties have been introduced over the years to distinguish between different orthogonal arrays of the same size, and some authors have attempted to enumerate all non-isomorphic arrays of small sizes. Orthogonal arrays also possess interesting relationships to several other combinatorial arrangements, including error-correcting codes and Hadamard matrices. In this talk, aimed at a general mathematical audience, we will present a brief and selective overview of orthogonal arrays, including their existence, construction, and relationships to other arrangements.
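For readers new to the topic, a minimal example (mine, not from the talk): the 4-run array below is an orthogonal array of strength 2 with three two-level factors, because every pair of columns contains each of the level combinations (0,0), (0,1), (1,0), (1,1) exactly once.

    \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}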
INI 1
18:15 to 18:45 Welcome Wine Reception
18:45 to 19:30 Dinner at Wolfson Court
Tuesday 6th September 2011
09:00 to 09:30 B Tang (Simon Fraser University)
Construction of orthogonal and nearly orthogonal Latin hypercube designs for computer experiments
We present a method for constructing good designs for computer experiments. The method derives its power from its basic structure that builds large designs using small designs. We specialize the method for the construction of orthogonal Latin hypercubes and obtain many results along the way. In terms of run sizes, the existence problem of orthogonal Latin hypercubes is completely solved. We also present an explicit result showing how large orthogonal Latin hypercubes can be constructed using small orthogonal Latin hypercubes. Another appealing feature of our method is that it can easily be adapted to construct other designs. We examine how to make use of the method to construct nearly orthogonal Latin hypercubes.
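As a concrete illustration of the object being constructed (a textbook example, not the construction in the talk), the 5-run design below is an orthogonal Latin hypercube: each column is a permutation of the centred levels and the two columns are exactly uncorrelated.

    import numpy as np

    L = np.array([[-2,  1],
                  [-1, -2],
                  [ 0,  0],
                  [ 1,  2],
                  [ 2, -1]])

    assert all(sorted(col) == [-2, -1, 0, 1, 2] for col in L.T)  # Latin hypercube property
    assert L[:, 0] @ L[:, 1] == 0                                # exact column orthogonality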
INI 1
09:30 to 10:00 D Woods (University of Southampton)
Weighted space-filling designs
We investigate methods of incorporating known dependencies between variables into design selection for computer codes. Adaptations of computer-generated coverage and spread designs are considered, with “distance” between two input points redefined to include a weight function. The methods can include quantitative and qualitative variables, and different types of prior information. They are particularly appropriate for computer codes where there may be large areas of the design space in which it is not scientifically useful or relevant to take an observation. The different approaches are demonstrated through interrogation of a computer model for atmospheric dispersion.
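One simple way to realise a weighted "distance" of the kind described above (the weight function here is hypothetical, chosen only for illustration) is to scale the Euclidean distance by the average prior relevance of the two points:

    import numpy as np

    def relevance(x):
        # Hypothetical prior weight: interest decays quickly in the first input.
        return np.exp(-5.0 * x[0])

    def weighted_distance(x1, x2):
        # Down-weight separations in regions judged scientifically uninteresting.
        return np.linalg.norm(x1 - x2) * 0.5 * (relevance(x1) + relevance(x2))

    def min_weighted_distance(X):
        # Maximin-style criterion under the weighted distance (maximised over candidate designs).
        n = len(X)
        return min(weighted_distance(X[i], X[j]) for i in range(n) for j in range(i + 1, n))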
INI 1
10:00 to 10:30 B Iooss (EDF R&D)
Space filling designs for computer experiments: some algorithms and numerical results on industrial problems
Complex computer codes, for instance those simulating physical phenomena, are often too time-consuming to be used directly for uncertainty, sensitivity, optimization and robustness analyses. A widely accepted way to circumvent this problem is to replace CPU-expensive computer models with inexpensive mathematical functions, called metamodels. A necessary condition for successful modelling of these computer experiments is to explore the whole space of variation of the computer model input variables. However, in many industrial applications we are faced with the harsh problem of the high dimensionality of the exploration space. In this communication, we will first focus on the metamodel validation process, which consists of evaluating the metamodel predictivity with respect to the initial computer code. This step has to be carried out with caution and robustness in industrial applications, especially in the framework of safety studies.

We propose and test an algorithm which optimizes the distance between the validation points and the metamodel learning points in order to estimate the true metamodel predictivity with a minimum number of validation points. Comparisons with classical validation algorithms and an application to a nuclear safety computer code show the relevance of this sequential validation design. Second, we will present some recent results on the properties of different space-filling designs. In practice, one has to choose which design to use in an exploratory phase of a numerical model. We will show the usefulness of some classification tools, such as those based on minimal spanning trees. We adopt a numerical approach to compare the performance of different types of space-filling designs as a function of their interpoint distances, L2-discrepancies and various sub-projection properties. Finally, we will present two recent problems posed by industrial applications: the introduction of inequality constraints between the inputs of a space-filling design, and the construction of space-filling designs mixing quantitative and qualitative factors.
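The "predictivity" referred to above is commonly summarised by a Q2-type coefficient computed on validation points; the sketch below shows that standard measure (the specific criterion optimised in the talk is not reproduced here):

    import numpy as np

    def predictivity_q2(y_code, y_metamodel):
        # Q2 = 1 - (sum of squared prediction errors) / (total variance of the code outputs).
        # Values close to 1 mean the metamodel reproduces the code well on unseen points.
        y_code, y_metamodel = np.asarray(y_code), np.asarray(y_metamodel)
        return 1.0 - np.sum((y_code - y_metamodel) ** 2) / np.sum((y_code - y_code.mean()) ** 2)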
INI 1
10:30 to 10:45 Discussion INI 1
11:00 to 11:30 Coffee / Tea
11:30 to 12:00 D Steinberg (Tel Aviv University)
Orthogonal nearly Latin hypercubes
INI 1
12:00 to 12:30 H Maruri-Aguilar (Queen Mary, University of London)
Sequential screening with elementary effects
The Elementary Effects (EE) method (Morris, 1991) is a simple but effective screening strategy. Starting from a number of initial points, the method creates random trajectories to then estimate factor effects. In turn, those estimates are used for factor screening. Recent research advances (Campolongo et al., 2004,2006) have enhanced the performance of the elementary effects method and the projections of the resulting design (Pujol, 2008). The presentation concentrates on a proposal (Boukouvalas et al., 2011) which turns the elementary effects method into a sequential design strategy. After describing the methodology, some examples are given and compared against the traditional EE method.
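For orientation, an elementary effect is simply a scaled one-factor-at-a-time difference evaluated along a trajectory; the sketch below follows the standard Morris (1991) recipe for a single trajectory (the sequential strategy of Boukouvalas et al. is not reproduced):

    import numpy as np

    def elementary_effects(f, x0, delta):
        # Perturb each factor in turn by delta and record the scaled change in the output.
        x = np.array(x0, dtype=float)
        effects = np.empty(len(x))
        for i in range(len(x)):
            x_next = x.copy()
            x_next[i] += delta
            effects[i] = (f(x_next) - f(x)) / delta
            x = x_next                  # the trajectory continues from the perturbed point
        return effects

    # Toy screening example: the third factor has no effect, so its elementary effect is zero.
    ee = elementary_effects(lambda z: z[0] + 10 * z[1] ** 2, [0.2, 0.2, 0.2], 0.1)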
INI 1
12:30 to 13:30 Lunch at Wolfson Court
14:00 to 15:00 G Reinman & A O'Hagan & D Bingham
Panel: Challenges When Interfacing Physical Experiments and Computer Models
INI 1
15:00 to 15:30 Coffee / Tea
15:30 to 16:00 W Notz (Ohio State University)
Batch sequential experimental designs for computer experiments
Finding optimal designs for computer experiments that are modeled using a stationary Gaussian Stochastic Process (GaSP) model is challenging because optimality criteria are usually functions of the unknown model parameters. One popular approach is to adopt sequential strategies. These have been shown to be very effective when the optimality criterion is formulated as an expected improvement function. Most of these sequential strategies assume observations are taken sequentially one at a time. However, when observations can be taken k at a time, it is not obvious how to implement sequential designs. We discuss the problems that can arise when implementing batch sequential designs and present several strategies for sequential designs taking observations in k-at-a-time batches. We illustrate these strategies with examples.
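For reference, the expected improvement criterion mentioned above, written for minimisation with current best observed value $f_{\min}$ and emulator mean and standard deviation $\mu(x)$, $\sigma(x)$ (the batch variants discussed in the talk build on this):

    \mathrm{EI}(x) = \bigl(f_{\min} - \mu(x)\bigr)\,\Phi(z) + \sigma(x)\,\varphi(z),
    \qquad z = \frac{f_{\min} - \mu(x)}{\sigma(x)},

where $\Phi$ and $\varphi$ are the standard normal distribution and density functions.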
INI 1
16:00 to 16:30 B Jones (SAS Institute, Inc.)
Bridge Designs for Modeling Systems with Small Error Variance
A necessary characteristic of designs for deterministic computer simulations is that they avoid replication. This characteristic is also necessary for one-dimensional projections of the design, since it may turn out that only one of the design factors has any non-negligible effect on the response. Latin Hypercube designs have uniform one-dimensional projections but are not efficient for fitting low-order polynomials when there is a small error variance. D-optimal designs are very efficient for polynomial fitting but have substantial replication in projections. We propose a new class of designs that bridge the gap between Latin Hypercube designs and D-optimal designs. These designs guarantee a minimum distance between points in any one-dimensional projection. Subject to this constraint they are D-optimal for any pre-specified model.
INI 1
16:30 to 17:00 A Dean
Non-collapsing Space-filling Designs for Bounded Regions
Many physical processes can be described via mathematical models implemented as computer codes. Since a computer code may take hours or days to produce a single output, a cheaper surrogate model (emulator) may be fitted for exploring the region of interest. The performance of the emulator depends on the "space-filling" properties of the design; that is, how well the design points are spread throughout the experimental region. The output from many computer codes is deterministic, in which case no replications are required at, or near, any design point to estimate error variability. In addition, designs that do not replicate any value for any single input ("non-collapsing" designs) are the most efficient when one or more inputs turn out to have little effect on the response. An algorithm is described for constructing space-filling and non-collapsing designs for computer experiments when the input region is bounded.
INI 1
17:00 to 17:15 Discussion INI 1
17:15 to 18:15 Poster Session
18:45 to 19:30 Dinner at Wolfson Court
Wednesday 7th September 2011
09:30 to 09:40 B Nelson
A Visitor's Guide to Experiment Design for Dynamic Discrete-Event Stochastic Simulation
INI 1
09:40 to 10:00 YJ Son (University of Arizona)
Distributed Federation of Multi-paradigm Simulations and Decision Models for Planning and Control
In this talk, we first discuss simulation-based shop floor planning and control, where 1) on-line simulation is used to evaluate decision alternatives at the planning stage, 2) the same simulation model (executing in fast mode) used at the planning stage is used as a real-time task generator (real-time simulation) during the control stage, and 3) the real-time simulation drives the manufacturing system by sending and receiving messages to an executor. We then discuss how simulation-based shop floor planning and control can be extended to enterprise-level activities (the top floor). To this end, we discuss the analogies between the shop floor and the top floor in terms of the components required to construct simulation-based planning and control systems, such as resource models, coordination models, physical entities, and simulation models. Differences between them are also discussed in order to identify new challenges that we face for top floor planning and control. A major difference is the way a simulation model is constructed so that it can be used for planning, depending on whether time synchronization among member simulations becomes an issue or not. We also discuss a distributed computing platform using web services and grid computing technologies, which allows us to integrate simulation and decision models, and software and hardware components. Finally, we discuss the DDDAMS (Dynamic Data-Driven Adaptive Multi-Scale Simulation) framework, where the aim is to augment the validity of simulation models in the most economical way by incorporating dynamic data into the executing model and letting the executing model steer the measurement process for selective data updates.
INI 1
10:00 to 10:30 M Marathe (Virginia Tech)
Policy Informatics for Co-evolving Socio-technical Networks: Issues in Believability and Usefulness
The talk outlines a high resolution interaction-based approach to support policy informatics for large co-evolving socio-technical networks. Such systems consist of a large number of interacting physical, technological, and human/societal components. Quantitative changes in HPC including faster machines and service-oriented software have created qualitative changes in the way information can be integrated in analysis of these large heterogeneous systems and supporting policy makers as they consider the pros and cons of various decision choices.

Agent-oriented simulation is an example of an interaction based computational technique useful for reasoning about biological, information and social networks. Developing scalable models raises important computational and conceptual issues, including computational efficiency, necessity of detailed representation and uncertainty quantification.

The talk will describe the development of a high performance computing based crisis management system called the Comprehensive National Incident Management System (CNIMS). As an illustrative case study we will describe how CNIMS can be used for developing a scalable computer assisted decision support system for pandemic planning and response. We will conclude by discussing challenging validation and verification issues that arise when developing such models.
INI 1
10:30 to 10:45 Discussion INI 1
11:00 to 11:30 Coffee / Tea
11:30 to 12:00 B Nelson & B Ankenman (Northwestern University)
Enhancing Stochastic Kriging Metamodels with Stochastic Gradient Estimators
Stochastic kriging is the natural extension of kriging metamodels for the design and analysis of computer experiments to the design and analysis of stochastic simulation experiments where response variance may differ substantially across the design space. In addition to estimating the mean response, it is sometimes possible to obtain an unbiased or consistent estimator of the response-surface gradient from the same simulation runs. However, like the response itself, the gradient estimator is noisy. In this talk we present methodology for incorporating gradient estimators into response surface prediction via stochastic kriging, evaluate its effectiveness in improving prediction, and specifically consider two gradient estimators: the score function/likelihood ratio method and infinitesimal perturbation analysis.
INI 1
12:00 to 12:30 JPC Kleijnen (Universiteit van Tilburg)
Simulation optimization via bootstrapped Kriging: survey
This presentation surveys simulation optimization via Kriging (also called Gaussian Process or spatial correlation) metamodels. These metamodels may be analyzed through bootstrapping, which is a versatile statistical method but must be adapted to the specific problem being analyzed. More precisely, a random or discrete-event simulation may be run several times for the same scenario (combination of simulation input values); the resulting replicated responses may be resampled with replacement, which is called "distribution-free bootstrapping". In engineering, however, deterministic simulation is often applied; such a simulation is run only once for the same scenario, so "parametric bootstrapping" is used. This bootstrapping assumes a multivariate Gaussian distribution, which is sampled after its parameters are estimated from the simulation input/output data. More specifically, this talk covers the following recent approaches: (1) Efficient Global Optimization (EGO) via Expected Improvement (EI) using parametric bootstrapping to obtain an estimator of the Kriging predictor's variance accounting for the randomness resulting from estimating the Kriging parameters. (2) Constrained optimization via Mathematical Programming applied to Kriging metamodels using distribution-free bootstrapping to validate these metamodels. (3) Robust optimization accounting for an environment that is not exactly known (so it is uncertain); this optimization may use Mathematical Programming and Kriging with distribution-free bootstrapping to estimate the Pareto frontier. (4) Bootstrapped Kriging may preserve a characteristic such as monotonicity of the outputs as a function of the inputs.
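A minimal sketch of the distribution-free bootstrap described above, assuming the stochastic simulation has already been replicated several times at each scenario (toy data here, not from the talk):

    import numpy as np

    rng = np.random.default_rng(2)

    def bootstrap_scenario_means(replications, n_boot=1000):
        # For each scenario, resample its replicated responses with replacement and
        # recompute the mean; the spread of these means reflects simulation noise.
        replications = np.asarray(replications)       # shape (n_scenarios, n_replications)
        n_scen, n_rep = replications.shape
        means = np.empty((n_boot, n_scen))
        for b in range(n_boot):
            idx = rng.integers(0, n_rep, size=(n_scen, n_rep))
            means[b] = np.take_along_axis(replications, idx, axis=1).mean(axis=1)
        return means

    # Each bootstrap row could be used to refit the Kriging metamodel, e.g. to validate it.
    boot_means = bootstrap_scenario_means(rng.normal(size=(5, 20)))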
INI 1
12:30 to 13:30 Lunch at Wolfson Court and informal discussions
13:30 to 18:00 Free Afternoon
19:30 to 22:00 Conference dinner at St Catharine's College
Thursday 8th September 2011
09:30 to 10:00 SG Henderson (Cornell University)
Input Uncertainty: Examples and Philosophies
I will present two examples that point to the importance of appreciating input uncertainty in simulation modeling. Both examples are set in the context of optimization problems where the objective function is estimated (with noise) through discrete-event simulation. I will then discuss some modeling philosophies adopted by the deterministic optimization research community in the context of input uncertainty, with the goal of identifying some modeling philosophies that might be appropriate for simulation optimization.
INI 1
10:00 to 10:30 S Chick (INSEAD)
Some Challenges with Input Uncertainty
In this short presentation, I will summarize several recent applications from various parts of health care that highlight several interesting challenges with modeling and input uncertainty. Beyond the usual challenges associated with Bayesian model average-type approaches for parameter uncertainty about statistical input parameters, some further challenges will also be identified. They include the difficulty that some decision makers have in thinking about data for “input” parameters when “output” data is easier to observe, the fact that data on intermediate or surrogate endpoints may be easier or less expensive to collect, and the possibility that the structure of the model that translates inputs to outputs might itself be uncertain. These examples and challenges have a number of implications, many not yet adequately solved, for policy decisions, the sensitivity of decisions to input uncertainty, the prioritization of data collection to reduce input uncertainty effectively, when to stop learning and when to make a system design decision, and how to model extreme events (e.g., heavy tails) that may have implications for the nonexistence of certain moments of interest. We will review a number of these as time permits.
INI 1
10:30 to 11:00 RCH Cheng (University of Southampton)
Screening for Important Inputs by Bootstrapping
We consider resampling techniques in two contexts involving uncertainty in input modelling. The first concerns the fitting of input models to input data. This is a problem of estimation and can be treated either parametrically or non-parametrically. In either case the problem of assessing uncertainty in the fitted input model arises. We discuss how resampling can be used to deal with this. The second problem concerns the situation where the simulation output depends on a large number of input variables and the problem is to identify which input variables are important in influencing output behaviour. Again we discuss how resampling can be used to handle this problem. An interesting aspect of both problems is that the replications used in the resampling are mutually independent. This means that greatly increased processing speed is possible if replications can be carried out in parallel. Recent developments in computer architecture make parallel implementation much more readily available. This has a particularly interesting consequence for handling input uncertainty when simulation is used in real time decision taking, where processing speed is paramount. We discuss this aspect especially in the context of real time system improvement, if not real time optimization.
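A minimal sketch of the first (non-parametric) setting, assuming an exponential input model and a toy simulation summary; each replication is independent of the others, which is what makes the parallel implementation mentioned above attractive:

    import numpy as np

    rng = np.random.default_rng(3)
    observed_service_times = rng.exponential(2.0, size=50)      # hypothetical input data

    def one_bootstrap_replication(data):
        # Resample the input data, refit the input model, rerun a toy simulation summary.
        resample = rng.choice(data, size=len(data), replace=True)
        fitted_mean = resample.mean()                            # refitted input parameter
        return rng.exponential(fitted_mean, size=1000).mean()    # stand-in simulation output

    # The replications are mutually independent, so they could be farmed out in parallel.
    outputs = np.array([one_bootstrap_replication(observed_service_times) for _ in range(200)])
    # The spread of `outputs` reflects uncertainty due to the finite input sample.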
INI 1
11:00 to 11:30 Coffee / Tea
11:30 to 12:00 R Barton (National Science Foundation)
Metamodels and the Bootstrap for Input Model Uncertainty Analysis
The distribution of simulation output statistics includes variation from the finiteness of the samples used to construct input probability models. Metamodels and bootstrapping provide a way to characterize this error. The metamodel-fitting experiment benefits from a sequential design strategy. We describe the elements of such a strategy, and show how they impact performance.
INI 1
12:00 to 12:30 R V Joseph (Georgia Institute of Technology)
Coupled Gaussian Process Models
Gaussian Process (GP) models are commonly employed in computer experiments for modeling deterministic functions. The model assumes second-order stationarity and therefore, the predictions can become poor when such assumptions are violated. In this work, we propose a more accurate approach by coupling two GP models together that incorporates both the non-stationarity in mean and variance. It gives better predictions when the experimental design is sparse and can also improve the prediction intervals by quantifying the change of local variability associated with the response. Advantages of the new predictor are demonstrated using several examples from the literature.
INI 1
12:30 to 13:30 Lunch at Wolfson Court
14:00 to 14:30 L House (Virginia Tech)
Assessing simulator uncertainty using evaluations from several different simulators
Any simulator-based prediction must take account of the discrepancy between the simulator and the underlying system. In physical systems, such as climate, this discrepancy has a complex, unknown structure that makes direct elicitation very demanding. Here, we propose a fundamentally different framework to that currently in use and consider the information in a collection of simulator evaluations, known as a Multi-Model Ensemble (MME). We justify our approach in terms of its transparency, tractability, and consistency with standard practice in, say, climate science. The statistical modelling framework is that of second-order exchangeability, within a Bayes linear treatment. We apply our methods to a reconstruction of boreal winter surface temperature.
INI 1
14:30 to 15:00 H Lee (University of California, Santa Cruz)
Constrained Optimization and Calibration for Deterministic and Stochastic Simulation Experiments
Optimization of the output of computer simulators, whether deterministic or stochastic, is a challenging problem because of the typical severe multimodality. The problem is further complicated when the optimization is subject to unknown constraints, those that depend on the value of the output, so the function must be evaluated in order to determine if the constraint has been violated. Yet, even an invalid response may still be informative about the function, and thus could potentially be useful in the optimization. We develop a statistical approach based on Gaussian processes and Bayesian learning to approximate the unknown function and to estimate the probability of meeting the constraints, leading to a sequential design for optimization and calibration.
INI 1
15:00 to 15:30 Coffee / Tea
15:30 to 17:00 JPC Kleijnen & S Chick & S Sanchez
Panel: Input uncertainty and experimental robustness
INI 1
18:45 to 19:30 Dinner at Wolfson Court
Friday 9th September 2011
09:00 to 09:30 P Ranjan (Acadia University)
Interpolation of Deterministic Simulator Outputs using a Gaussian Process Model
For many expensive deterministic computer simulators, the outputs do not have replication error and the desired metamodel (or statistical emulator) is an interpolator of the observed data. Realizations of Gaussian spatial processes (GP) are commonly used to model such simulator outputs. Fitting a GP model to n data points requires the computation of the inverse and determinant of n x n correlation matrices, R, which are sometimes computationally unstable due to near-singularity of R. This happens if any pair of design points are very close together in the input space. The popular approach to overcome near-singularity is to introduce a small nugget (or jitter) parameter in the model that is estimated along with the other model parameters. The inclusion of a nugget in the model often causes unnecessary over-smoothing of the data. In this talk, we present a lower bound on the nugget that minimizes the over-smoothing and an iterative regularization approach to construct a predictor that further improves the interpolation accuracy. We also show that the proposed predictor converges to the GP interpolator.
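To illustrate the numerical issue (a generic jitter demonstration only; the lower bound derived in the talk is not reproduced), two nearly coincident design points make a Gaussian correlation matrix nearly singular, and adding a small nugget to its diagonal improves the conditioning markedly:

    import numpy as np

    rng = np.random.default_rng(4)

    X = np.vstack([rng.random((10, 1)), [[0.5], [0.50001]]])   # two nearly coincident points
    R = np.exp(-50.0 * (X - X.T) ** 2)                         # Gaussian correlation matrix

    delta = 1e-6                                               # small nugget / jitter
    R_nugget = R + delta * np.eye(len(X))

    print(np.linalg.cond(R), np.linalg.cond(R_nugget))         # conditioning improves markedly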
INI 1
09:30 to 10:00 D Bingham (Simon Fraser University)
Calibration of multi-fidelity models for radiative shock
Environmental and economic industries rely on high-performance materials such as lightweight alloys, recyclable motor vehicle and building components, and high-efficiency lighting. Material properties, as expressed through crystal structure, are crucial to this understanding. Based on first-principles calculations, it is still impossible in most materials to infer ground-state properties purely from a knowledge of their atomic components. While many methods attempt to predict crystal structures and compound stability, we explore models which infer the existence of structures on the basis of combinatorics and geometric simplicity. Computational models based on these first physics principles are called VASP codes. We illustrate the use of a statistical surrogate model to produce predictions of VASP codes as a function of a moderate number of VASP inputs.
INI 1
10:00 to 10:30 J Oakley (University of Sheffield)
Investigating discrepancy in computer model predictions
In most computer model predictions, there will be two sources of uncertainty: uncertainty in the choice of model input parameters, and uncertainty in how well the computer model represents reality. Dealing with the second source of uncertainty can be difficult, particularly when we have no field data with which to compare the accuracy of the model predictions. We propose a framework for investigating the "discrepancy" of the computer model output: the difference between the model run at its 'best' inputs and reality, which involves 'opening the black box' and considering structural errors within the model. We can then use sensitivity analysis tools to identify important sources of model error, and hence direct effort into improving the model. Illustrations are given in the field of health economic modelling.
INI 1
10:30 to 10:45 Discussion INI 1
11:00 to 11:30 Coffee / Tea
11:30 to 12:00 M Pratola (Los Alamos National Laboratory)
Bayesian Calibration of Computer Model Ensembles

Using field observations to calibrate complex mathematical models of a physical process allows one to obtain statistical estimates of model parameters and construct predictions of the observed process that ideally incorporate all sources of uncertainty. Many of the methods in the literature use response surface approaches, and have demonstrated success in many applications. However, there are notable limitations, such as when one has a small ensemble of model runs where the model outputs are high dimensional. In such instances, arriving at a response surface model that reasonably describes the process can be difficult, and computational issues may also render the approach impractical.

In this talk we present an approach that has numerous benefits compared to some popular methods. First, we avoid the problems associated with defining a particular regression basis or covariance model by making a Gaussian assumption on the ensemble. By applying Bayes' theorem, the posterior distribution of unknown calibration parameters and predictions of the field process can be constructed. Second, as the approach relies on the empirical moments of the distribution, computational and stationarity issues are much reduced compared to some popular alternatives. Finally, in the situation that additional observations are arriving over time, our method can be seen as a fully Bayesian generalization of the popular Ensemble Kalman Filter.

INI 1
12:00 to 12:30 B Haaland (Duke-NUS, NUS)
Accurate emulators for large-scale computer experiments
Large-scale computer experiments are becoming increasingly important in science. A multi-step procedure for modeling such experiments is introduced, which builds an accurate interpolator in multiple steps. In practice, the procedure shows substantial improvements in overall accuracy, but its theoretical properties are not well established. We introduce the terms nominal and numeric error and decompose the overall error of an interpolator into nominal and numeric portions. Bounds on the numeric and nominal error are developed to show theoretically that substantial gains in overall accuracy can be attained with the multi-step approach.
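The error decomposition described above can be written schematically (notation mine): if $f$ is the simulator, $s$ the exact interpolator and $\tilde{s}$ the numerically computed interpolator, the overall error splits by the triangle inequality into a nominal and a numeric part,

    |f(x) - \tilde{s}(x)| \;\le\; \underbrace{|f(x) - s(x)|}_{\text{nominal}} \;+\; \underbrace{|s(x) - \tilde{s}(x)|}_{\text{numeric}}.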
INI 1
12:30 to 13:30 Lunch at Wolfson Court
14:00 to 15:30 HP Wynn & D Higdon & R Barton
Panel: Future Challenges Integrating Multiple Modes of Experimentation
INI 1
16:00 to 16:10 Conference Closing INI 1
16:10 to 17:30 Informal individual discussions
18:45 to 19:30 Dinner at Wolfson Court