
DOE Seminar

A sequential methodology for integrating physical and computer experiments

Romano, D (Cagliari)
Friday 15 August 2008, 15:00-15:30

Seminar Room 1, Newton Institute

Abstract

In advanced industrial sectors, like aerospace, automotive, microelectronics and telecommunications, intensive use of simulation and lab trials is already daily practice in R&D activities. In spite of this, there is still no comprehensive approach in the applied statistical literature for integrating physical and simulation experiments. Computer experiments, an autonomous discipline since the end of the eighties (Sacks et al., 1989; Santner et al., 2003), provides a limited view of what a "computer experiment" can be in an industrial setting (the computer program is considered expensive to run and its output strictly deterministic) and has practically ignored the "integration" problem. Existing contributions mainly address the problem of calibrating the computer model based on field data. Kennedy and O'Hagan (2001) and Bayarri et al. (2007) introduced a fully Bayesian approach that also models the bias between the computer model and the physical data, thus addressing model validation as well, i.e. assessing how well the model represents reality. Nevertheless, in this body of research the role of physical observations is ancillary: they are generally few and not subject to design.
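
For reference, the Kennedy and O'Hagan (2001) formulation (notation paraphrased here, not taken from the present paper) models a field observation at input x as a scaled computer-model output plus a discrepancy term and observation noise:

    y(x) = \rho\, \eta(x, \theta) + \delta(x) + \varepsilon

where \eta(x, \theta) is the computer model run at calibration parameters \theta, \delta(x) is the model bias whose estimation supports validation, \varepsilon is measurement error, and \rho is a scale parameter; in the fully Bayesian treatment \eta and \delta are given Gaussian process priors.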

In the fifties, Box and Wilson (1951) provided a framework, which they called sequential experimentation, for improving industrial systems by physical experiments. Knowledge of the system is built incrementally by organising the investigation as a sequence of related experiments with varying scope (screening, prediction, and optimisation).

A first attempt to introduce such a systemic view in the context of integrated physical and computer experiments is presented in the paper. We envisage a sequential approach where physical and computer experiments are used in a synergistic way, with the twin goals of improving a real system of interest and validating/improving the computer model. The whole process stops when a satisfactory level of improvement is realised.

It is important to point out that the two sources of information have distinct roles, as they produce information with different degrees of cost (speed) and reliability. In the typical situation where the simulator is cheaper (faster) and the physical set-up is more reliable, it is sensible to use simulation experiments for exploring the space of the design variables in depth in order to obtain innovative findings, and to use a moderate number of the costly physical trials for verifying those findings. If findings obtained by simulation are not confirmed in the field, the computer code should be revised accordingly.
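
As an illustration only (the paper does not specify its procedure at this level of detail), a minimal sketch of this division of labour, with hypothetical stand-in functions run_simulator and run_physical_trial:

    import random

    def run_simulator(x):
        # Hypothetical cheap, deterministic computer model (stand-in only).
        return (x - 0.3) ** 2

    def run_physical_trial(x):
        # Hypothetical costly, noisy physical trial (stand-in only).
        return (x - 0.35) ** 2 + random.gauss(0.0, 0.01)

    def improve_and_confirm(n_sim=200, n_phys=5, tol=0.05):
        # Explore the design space in depth with many cheap simulation runs.
        candidates = [random.uniform(0.0, 1.0) for _ in range(n_sim)]
        best = min(candidates, key=run_simulator)
        # Spend a moderate number of costly physical trials on verification.
        field = [run_physical_trial(best) for _ in range(n_phys)]
        field_mean = sum(field) / len(field)
        # If the field does not confirm the simulated finding, the
        # computer code should be revised before continuing.
        confirmed = abs(field_mean - run_simulator(best)) <= tol
        return best, confirmed

Here the simulator screens many candidate settings and the physical trials are reserved for confirming the single most promising one, mirroring the cost/reliability trade-off described above.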

Different decision levels are handled within the framework. High-level decisions are whether to stop or continue, whether to conduct the next experiment on the physical system or on its simulator, and what the purpose of the experiment is (exploration, improvement, confirmation, model validation). Intermediate-level decisions are the location of the experimental region and the run size.
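
One hedged way to picture these decision levels is as the record that must be fixed before each experiment in the sequence (field names here are illustrative, not the paper's):

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class NextExperiment:
        # High-level decisions.
        stop: bool        # stop the study, or continue with another experiment
        platform: str     # "physical" or "simulator"
        purpose: str      # "exploration", "improvement", "confirmation",
                          # or "model validation"
        # Intermediate-level decisions.
        region_centre: Tuple[float, ...]  # location of the experimental region
        run_size: int                     # number of runs in the next design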

Presentation

[ppt]

Audio

MP3

