
Workshop Programme

for the period 18 - 22 July 2011

Experiments for Processes with Time or Space Dynamics

18 - 22 July 2011

Timetable

Monday 18 July
09:00-10:45 Registration
10:45-11:15 Tea and Coffee
11:15-11:30 Opening remarks and welcome from Sir David Wallace (INI Director)
11:30-12:30 Macchietto, S (Imperial College London)
  Optimal model-based design for experiments: some new objective and constraint formulations Sem 1
 

The presentation will briefly review some of the reasons for the recent renewed interest in the Design of Experiments (DoE) and some key developments which, in the author's view and experience, underpin and enable this success. One is the ability to combine classical DoE methods with substantially more sophisticated mathematical descriptions of the physics in the experiment being designed, thus putting the "model-based" firmly in front of DoE. Another, the main subject of the talk, is a better understanding of the relationship between desired performance and evaluation metric(s), leading to the disaggregation of a single "best" design objective into constituent components, and to much richer formulations of the design problem that can be tailored to specific situations. A final reason is the substantial improvement in the numerical and computing tools supporting the model-based design of experiments, and above all in the availability of integrated modelling/solution environments which make the whole technology accessible to a much wider engineering community. The presentation will illustrate, with reference to examples, some of the new problem formulations that can be used to represent more sophisticated design requirements (including parameter precision, anti-correlation, robustness to uncertainty) and, briefly, some of the newer solution approaches (including design of parallel experiments and on-line re-design). It will also illustrate some successful applications in a variety of demanding industrial areas, ranging from fuel cells to complex reactor design to biomedical applications.

 
12:30-13:30 Lunch at Wolfson Court
14:00-15:00 Van Impe, JFM (Katholieke Universiteit Leuven)
  Optimal experimental design for nonlinear systems: Application to microbial kinetics identification Sem 1
 

Dynamic biochemical processes are omnipresent in industry, e.g., brewing and the production of enzymes and pharmaceuticals. However, since accurate models are required for model based optimisation and measurements are often labour and cost intensive, Optimal Experiment Design (OED) techniques for parameter estimation are valuable tools to limit the experimental burden while maximising the information content. To this end, scalar measures of the Fisher information matrix (FIM) are often exploited in the objective function. In this contribution, we focus on the parameter estimation of nonlinear microbial kinetics. More specifically, the following issues are addressed. (1) Nonlinear kinetics. Since microbial kinetics is most often nonlinear, the unknown parameters appear explicitly in the design equations. Therefore, selecting optimal initialisation values for these parameters, as well as setting up a convergent sequential design scheme, is of great importance. (2) Biological kinetics. Since we deal with models for microbial kinetics, the design of dynamic experiments faces additional constraints. For example, upon applying a step change in temperature, an (unmodelled) lag phase is induced in the microbial population's response. To avoid this, additional constraints need to be formulated on the admissible gradients of the input profiles, thus safeguarding model validity under dynamically changing environmental conditions. (3) Competing objectives. Not only do different scalar measures of the FIM exist, but they may also be competing. For instance, the E-criterion tries to minimise the largest error, while the modified E-criterion aims at obtaining a similar accuracy for all parameters. Given this competing nature, a multi-objective optimisation approach is adopted for tackling these OED problems. The aim is to produce the set of optimal solutions, i.e., the so-called Pareto set, in order to illustrate the trade-offs to be made.
In addition, combinations of parameter estimation quality and productivity related objectives are explored in order to allow an accurate estimation during production runs, and decrease down-time and losses due to modelling efforts. To this end, ACADO Multi-Objective has been employed, which is a flexible toolkit for solving dynamic optimisation or optimal control problems with multiple and conflicting objectives. The results obtained are illustrated with both simulation studies and experimental data collected in our lab.
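As a small illustration of the competing scalar measures mentioned above, the sketch below evaluates the E-criterion and the modified E-criterion on a hypothetical 2x2 Fisher information matrix (the numbers are assumptions for illustration, not values from the talk):

```python
import numpy as np

# Hypothetical 2x2 Fisher information matrix for two kinetic parameters
# (illustrative numbers only)
F = np.array([[4.0, 1.0],
              [1.0, 0.5]])

eig = np.linalg.eigvalsh(F)          # eigenvalues in ascending order

# E-criterion: maximise the smallest eigenvalue of the FIM,
# i.e. minimise the largest parameter error
e_criterion = eig[0]

# Modified E-criterion: minimise the condition number lambda_max/lambda_min,
# aiming at a similar accuracy for all parameters (ideal value: 1)
mod_e_criterion = eig[-1] / eig[0]

print(e_criterion, mod_e_criterion)
```

A design maximising `e_criterion` need not minimise `mod_e_criterion`, which is why the criteria are treated as competing objectives and a Pareto set is sought.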

 
15:00-15:30 Tea and Coffee
15:30-16:30 Schwabe, R (Otto-von-Guericke-Universität Magdeburg)
  Individuals are different: Implications on the design of experiments Sem 1
 

When dynamics are measured repeatedly in biological entities such as human beings or animals, the diversity of individuals may have a crucial impact on the outcomes of the measurements. An adequate approach for this situation is to assume random coefficients for each individual. This leads to non-linear mixed models, which have attracted increasing popularity in many fields of application in recent years thanks to advances in computing. In such studies the main emphasis is placed on the estimation of population (location) parameters describing the mean behaviour of the individuals, but interest may also lie in the prediction of further responses for the specific individuals under investigation. Here we will indicate the problems and implications of this approach for the design of experiments and illustrate various consequences with the simple example of an exponential decay. However, it remains an open question what the "correct" measure of performance of a design is in this setting.

 
16:30-17:00 Mielke, T (Otto-von-Guericke-Universität Magdeburg)
  Optimal design for the estimation of population location parameters in nonlinear mixed effects models Sem 1
 

Nonlinear mixed effects models are frequently used in the analysis of grouped data. Especially in pharmacological studies, the observed individuals usually share a common response structure, so that information from individual responses can be merged to obtain efficient estimates. Mixed effects models can be used to describe population studies by assuming the individual parameter vectors to be realizations of independently distributed random variables, which for nonlinear response functions of the individual parameters yields nontrivial models. Unfortunately, difficulties arise in nonlinear mixed effects models, as there exists no closed-form representation of the likelihood function of the observations and hence no closed form of the Fisher information. Optimal designs in nonlinear mixed effects models are usually based on approximations of the Fisher information, so that poor approximations may lead to poor experimental designs. In this talk we discuss different approaches for approximating the information matrix and the influence of the approximations on the implied designs in pharmacokinetic studies.
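To make the approximation issue concrete, here is a minimal sketch of the common first-order (FO) approximation of the information about a population location parameter in a toy nonlinear mixed effects model (all model values and the exponential-decay form are my assumptions, not from the talk):

```python
import numpy as np

# Toy nonlinear mixed effects model:
#   y_ij = exp(-theta_i * t_j) + eps_ij,  theta_i = theta + b_i,
#   b_i ~ N(0, omega2),  eps_ij ~ N(0, sigma2)     (illustrative values)
theta, omega2, sigma2 = 1.0, 0.04, 0.01
t = np.array([0.5, 1.0, 2.0])            # candidate sampling times

g = -t * np.exp(-theta * t)              # df/dtheta at b = 0, also the
                                         # random-effect sensitivity

# FO marginal covariance of one individual's observations:
# V = omega2 * g g' + sigma2 * I
V = omega2 * np.outer(g, g) + sigma2 * np.eye(len(t))

# FO approximation of the information about the location parameter theta
fim = g @ np.linalg.solve(V, g)
print(fim)
```

Different linearisation points (e.g. around posterior modes of the random effects rather than zero) change `V` and hence the implied design, which is exactly the sensitivity the talk examines.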

 
17:00-17:30 Pagendam, D (CSIRO Mathematics, Informatics and Statistics)
  Optimal experimental design for stochastic population models Sem 1
 

Markov population processes are popular models for studying a wide range of phenomena including the spread of disease, the evolution of chemical reactions and the movements of organisms in population networks (metapopulations). Our ability to use these models can, however, be limited by our knowledge of parameters, such as disease transmission and recovery rates in an epidemic. Recently, there has been interest in devising optimal experimental designs for stochastic models, so that practitioners can collect data in a manner that maximises the precision of maximum likelihood estimates of the parameters of these models. I will discuss some recent work on optimal design for a variety of population models, beginning with some simple one-parameter models where the optimal design can be obtained analytically, and moving on to more complicated multi-parameter models in epidemiology that involve latent states and non-exponentially distributed infectious periods. For these more complex models, the optimal design must be arrived at using computational methods, and we rely on a Gaussian diffusion approximation to obtain analytical expressions for the Fisher information matrix, which is at the heart of most optimality criteria in experimental design. I will outline a simple cross-entropy algorithm that can be used for obtaining optimal designs for these models. We will also explore some recent work on optimal designs for population networks with the aim of estimating migration parameters, with application to avian metapopulations.
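As a hedged sketch of the cross-entropy idea mentioned above (a toy one-parameter decay model, not one of the epidemic models from the talk), the following optimises a single sampling time by repeatedly re-fitting a sampling distribution to the best candidate designs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy objective: Fisher information of a single sampling time t for the
# model y = exp(-theta*t) + noise with theta = 1 (illustrative only).
# The optimum is t = 1/theta = 1.
def info(t):
    return (t * np.exp(-t)) ** 2

# Minimal cross-entropy search over the design variable t
mu, sd = 3.0, 2.0                    # initial sampling distribution
for _ in range(30):
    t = rng.normal(mu, sd, size=200)
    t = np.clip(t, 1e-6, None)       # keep designs feasible (t > 0)
    elite = t[np.argsort(info(t))[-20:]]   # best 10% of candidate designs
    mu, sd = elite.mean(), elite.std() + 1e-8
print(mu)   # converges towards 1.0
```

The same loop generalises to vector-valued designs by fitting a multivariate sampling distribution to the elite samples.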

 
17:30-18:30 Drinks Reception
Tuesday 19 July
09:00-10:00 Melas, VB (Saint-Petersburg State University)
  On sufficient conditions for implementing the functional approach Sem 1
 

Let us consider the general nonlinear regression model under standard assumptions on the experimental errors. Let also the following assumptions be fulfilled: (i) the regression function depends on a scalar variable belonging to the design interval, (ii) the derivatives of the function with respect to the parameters generate an extended Chebyshev system on the design interval, (iii) the matrix of second derivatives of the optimality criterion with respect to the different information matrix elements is positive definite. Then, under non-restrictive assumptions, it can be proved that the Jacobi matrix of the system of differential equations that implicitly defines the support points and weight coefficients of the optimal design is invertible. This allows us to apply the Implicit Function Theorem to represent the points and weights by a Taylor series. The corresponding theorems, as well as particular examples of nonlinear models, are elaborated. The results are generalisations of those given in the monograph recently published by the author.
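In symbols, the mechanism can be sketched as follows (the notation is assumed for illustration, not taken from the talk): the support points and weights τ(z) of the optimal design solve a stationarity system in the model parameter z, and invertibility of the Jacobian permits a Taylor representation.

```latex
% Stationarity system defining the design (x_i: support points, w_i: weights)
\[
  \Phi\bigl(\tau(z),\,z\bigr) = 0, \qquad
  \tau(z) = \bigl(x_1,\dots,x_n,\;w_1,\dots,w_n\bigr).
\]
% If \partial\Phi/\partial\tau is invertible, the Implicit Function Theorem
% gives the derivative and hence a Taylor expansion of the design:
\[
  \tau'(z) = -\Bigl(\frac{\partial \Phi}{\partial \tau}\Bigr)^{-1}
              \frac{\partial \Phi}{\partial z},
  \qquad
  \tau(z) = \tau(z_0) + \sum_{k \ge 1} \frac{\tau^{(k)}(z_0)}{k!}\,(z - z_0)^k .
\]
```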

 
10:00-11:00 Bezzo, F (Università degli Studi di Padova)
  Enhanced model-based experiment design techniques for parameter identification in complex dynamic systems under uncertainty Sem 1
 

A wide class of physical systems can be described by dynamic deterministic models expressed in the form of systems of differential and algebraic equations. Once a dynamic model structure is found adequate to represent a physical system, a set of identification experiments needs to be carried out to estimate the set of model parameters in the most precise and accurate way. Model-based design of experiments (MBDoE) techniques represent a valuable tool for the rapid assessment and development of dynamic deterministic models, allowing for the maximisation of the information content of the experiments in order to support and improve the parameter identification task. However, uncertainty in the model parameters, in the model structure itself, or in the representation of the experimental facility may lead to design procedures that turn out to be scarcely informative. Additionally, constraints may turn out to be violated, making the experiment unfeasible or even unsafe. Handling uncertainty is a complex and still open problem, although in recent years significant research effort has been devoted to tackling issues in this area. Here, some approaches developed at CAPE-Lab at the University of Padova will be critically discussed. First, Online Model-Based Redesign of Experiment (OMBRE) strategies will be considered. In OMBRE the objective is to exploit the information as soon as it is generated by the running experiment. The manipulated input profiles of the running experiment are updated by performing one or more intermediate experiment designs (i.e., redesigns), and each redesign is performed adopting the current value of the parameter set. In addition, a model updating policy including disturbance estimation embedded within an OMBRE strategy (DE-OMBRE) can be considered.
In the DE-OMBRE approach, an augmented model lumping the effect of systematic errors is considered to estimate both the states and the system outputs in a given time frame, updating the constraint conditions in a consistent way as soon as the effect of unknown disturbances propagates in the system. Finally, backoff-based MBDoE, where uncertainty is explicitly accounted for so as to plan a test that is both optimally informative and safe by design, is discussed.

 
11:00-11:30 Tea and Coffee
11:30-12:30 López Fidalgo, J (Universidad de Castilla-la Mancha)
  Optimal experimental designs for stochastic processes whose covariance is a function of the mean Sem 1
 

Recent literature emphasizes, for the analysis of compartmental models, the need for models of stochastic processes whose covariance structure depends on the mean. Covariance functions must be positive definite; ensuring this for a stochastic process whose covariance is a function of the mean is nontrivial and constitutes one of the challenges of the present work. We show that there exists a class of functions that, composed with the mean of the process, preserve positive definiteness and can be used for the purposes of the present talk. We offer some examples for the easy construction of such covariances and then study the problem of locally D-optimal design through both simulation studies and real data from a radiation retention model in the human body.
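A small numerical illustration of the positive-definiteness issue (my own toy construction, not necessarily the class of functions from the talk): composing the minimum kernel with a positive, increasing mean function gives a valid covariance, since it is the covariance of a Wiener process run on the transformed clock μ(t).

```python
import numpy as np

# Illustrative check: for a positive, increasing mean function mu, the
# kernel K(s,t) = min(mu(s), mu(t)) is a valid (positive definite)
# covariance at distinct times -- the Wiener covariance on the clock mu.
t = np.linspace(0.1, 2.0, 8)
mu = 1.0 - 0.5 * np.exp(-t)          # a monotone mean, e.g. a retention curve
K = np.minimum.outer(mu, mu)

# Positive definiteness check via eigenvalues
eigvals = np.linalg.eigvalsh(K)
print(eigvals.min() > 0)
```

An arbitrary function of the mean would not pass this check, which is why characterising the admissible class is the nontrivial part of the work.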

 
12:30-13:30 Lunch at Wolfson Court
14:00-15:00 Zhigljavsky, A (Cardiff University)
  New approach to designing experiments with correlated observations Sem 1
 

I will review some results of an on-going joint research project with Holger Dette and Andrey Pepelyshev. In this project, we propose and develop a new approach to the problem of optimal design for regression experiments with correlated observations. This approach extends the well-known techniques of Bickel-Herzberg and covers the cases of long-range dependence in observations and different asymptotic relations between the number of observations and the size of the design space. In many interesting cases the correlation kernels become singular, which implies that traditional methods are no longer applicable. In these cases, potential theory can be used to derive optimality conditions and establish the existence and uniqueness of the optimal designs. In many instances the optimal designs can be computed explicitly.

 
15:00-15:30 Tea and Coffee
15:30-16:15 Harman, R (Comenius University)
  On exact optimal sampling designs for processes with a product covariance structure Sem 1
 

Assume a random process with a parametrized mean value and a Wiener covariance structure. For this model, we will exhibit three classes of mean value functions for which it is possible to find an explicit form of the exact optimal sampling design. We will also show that optimal design problems with a product covariance structure can be transformed into one another. This gives us insight into the relations between seemingly different optimal design problems.

 
16:15-17:00 Bardow, A (RWTH Aachen University)
  Optimal experimental design for the well and the ill(-posed problems) Sem 1
 

The talk discusses both recent applications and extensions of model-based optimal experimental design (OED) theory for challenging problems motivated from chemical engineering. Despite the progress of advanced modeling and simulation methods, experiments will continue to form the basis of all engineering and science. Since experiments usually require significant effort, the best use of these resources should be made. Model-based optimal experimental design provides a rigorous framework to achieve this goal by determining the best settings for the experimental degrees of freedom for the question of interest. In this work, the benefits of applying optimal experimental design methods will be demonstrated for the determination of physical properties in chemical engineering applications. In particular, the application to diffusion measurements is considered. Since diffusion is slow, current experiments tend to be very time-consuming. Recently, lab-on-a-chip technology brought the promise of speeding up the measurements due to a drastic decrease in characteristic distances and thus diffusion time. Here, a rigorous optimization of microfluidic experiments for the determination of diffusion coefficients is performed. The OED results are quantitatively validated in experiments, showing that the accuracy of diffusion measurements can be increased by orders of magnitude while reducing measurement times to minutes. After discussing applications, extensions of classical OED methods are presented. In particular, the experimental design of ill-posed problems is considered. Here, classical design approaches lead to designs that are even qualitatively wrong, whereas the recently introduced METER criterion allows for a sound solution. The METER criterion aims at the minimization of the expected total error and thereby captures the bias-variance trade-off in ill-posed problems. For the development of predictive models for physical properties, model discrimination and validation are critical steps.
For this task, a rational framework is proposed to identify the components and mixtures that allow for optimal model discrimination. The proposed framework combines model-based methods for optimal experimental design with approaches from computer-aided molecular design (CAMD). By selecting the right mixtures to test, a targeted and more efficient approach towards predictive models for physical properties becomes viable.

 
17:00-17:30 Winterfors, E
  Bayesian optimization: A framework for optimal computational effort for experimental design Sem 1
 

DoE for models involving time or space dynamics is often very computationally demanding. Predicting a single experimental outcome may require significant computation, let alone evaluating a design criterion and optimizing it with respect to the design parameters. Finding the exact optimum of the design criterion would typically take infinite computation, and any finite computation will yield a result possessing some uncertainty (due to approximation of the design criterion as well as stopping of the optimization procedure). Ideally, one would like to optimize not only the design criterion, but also the way it is approximated and optimized, in order to get the largest likely improvement in the design criterion relative to the computational effort spent. Using a Bayesian method for the optimization of the design criterion (not only for calculating it) can accomplish such an optimal trade-off between the (computational) resources spent planning the experiment and the expected gain from carrying it out. This talk will lay out the concepts and theory necessary to perform a fully Bayesian optimization that maximizes the expected improvement of the design criterion in relation to the computational effort spent.
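The trade-off described here is the territory of Bayesian optimisation, whose standard acquisition function is the expected improvement. As a hedged sketch (the generic textbook formula, not the speaker's specific framework; numbers illustrative):

```python
import math

# Expected improvement of evaluating a design whose criterion value is
# modelled as Gaussian with mean mu and standard deviation sd, relative
# to the best value found so far
def expected_improvement(mu, sd, best):
    if sd <= 0:
        return max(mu - best, 0.0)
    z = (mu - best) / sd
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)   # normal pdf
    cdf = 0.5 * (1 + math.erf(z / math.sqrt(2)))            # normal cdf
    return sd * (z * cdf + pdf)

ei = expected_improvement(mu=1.2, sd=0.3, best=1.0)
print(ei)
```

Comparing `ei` against the cost of the computation that would produce the evaluation gives exactly the resources-versus-gain balance the talk advocates.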

 
Wednesday 20 July
09:00-10:00 Pronzato, L (Université de Nice Sophia Antipolis)
  Adaptive design and control Sem 1
 

There exist strong relations between experimental design and control, for instance in situations where optimal inputs are constructed in order to obtain precise parameter estimation in dynamical systems or when suitably designed perturbations are introduced in adaptive control to force enough excitation into the system. The presentation will focus on adaptive design when the construction of an optimal experiment requires the knowledge of the model parameters and current estimated values are substituted for unknown true values. This adaptation to estimated values creates dependency among observations and makes the investigation of the asymptotic behaviors of the design and estimator a much more complicated issue than when the design is specified independently of the observations. Also, even if the system considered is static, this adaptation introduces some feedback and the adaptive-design mechanism can be considered as a particular adaptive-control scheme. The role of experimental design in the asymptotic properties of estimators will be emphasized. The assumption that the set of experimental variables (design points) is finite facilitates the study of the asymptotic properties of estimators (strong consistency and asymptotic normality) in stochastic regression models. Two situations will be considered: adaptive D-optimal design and adaptive design with a cost constraint where the design should make a compromise between maximizing an information criterion (D-optimality) and minimizing a cost (function optimization). The case when the weight given to cost minimization asymptotically dominates will be considered in detail in connection with self-tuning regulation and self-tuning optimization problems.
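A minimal sketch of the adaptive-design loop described above, for a toy static regression y = exp(-θx) + noise on a finite design space (all values are illustrative assumptions): each iteration substitutes the current estimate into the design criterion, observes, and re-estimates, so the observations become dependent through the design.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model and finite design space (illustrative values)
theta_true, sigma = 1.0, 0.05
X = np.linspace(0.1, 3.0, 30)        # finite set of candidate design points

def sens(x, th):                     # df/dtheta for f(x, th) = exp(-th*x)
    return -x * np.exp(-th * x)

theta_hat = 0.5                      # initial guess
xs, ys = [], []
for _ in range(20):
    # adaptive step: pick the point most informative at the CURRENT estimate
    x_next = X[np.argmax(sens(X, theta_hat) ** 2)]
    y_next = np.exp(-theta_true * x_next) + rng.normal(0, sigma)
    xs.append(x_next); ys.append(y_next)
    # one Gauss-Newton step for the least-squares estimate of theta
    g = sens(np.array(xs), theta_hat)
    r = np.array(ys) - np.exp(-theta_hat * np.array(xs))
    theta_hat += g @ r / (g @ g)
print(theta_hat)   # close to theta_true = 1.0
```

The asymptotics of `theta_hat` under this feedback, which classical i.i.d. arguments do not cover, are precisely what the talk analyses.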

 
10:00-10:30 Jauberthie, C (LAAS)
  Methodology and application of optimal input design for parameter estimation Sem 1
 

An optimal input design technique for parameter estimation is presented in this talk. The original idea is the combination of a dynamic programming method with a gradient algorithm for an optimal input synthesis. This approach allows us to include realistic practical constraints on the input and output variables. A description of this approach is presented, followed by an example concerning an aircraft longitudinal flight.

 
10:30-11:00 Skubalska-Rafajlowicz, E (Wroclaw University of Technology)
  Neural networks for nonlinear modeling of dynamic systems: Design problems Sem 1
 

We start from a brief review of artificial neural networks with external dynamics as models for nonlinear dynamic systems (NARX, NFIR). We discuss problems arising in the design of such networks. In particular, we put emphasis on active learning, i.e., on iterative improvements of the Fisher information matrix. Furthermore, we propose random projections (applied to input and/or output signals) for increasing the robustness of the model selection process.

 
11:00-11:30 Tea and Coffee
11:30-12:30 Hjalmarsson, H (KTH - Royal Institute of Technology)
  Applications-oriented experiment design for dynamical systems Sem 1
 

In this talk we present a framework for applications-oriented experiment design for dynamic systems. The idea is to generate a design such that certain performance criteria of the application are satisfied with high probability. We discuss how to approximate this problem by a convex optimization problem and how to address the Achilles' heel of optimal experiment design, namely that the optimal design depends on the true system. We also elaborate on how the cost of an identification experiment is related to the performance requirements of the application, and on the importance of experiment design in reduced order modeling. We illustrate the methods on some problems from control and systems theory.

 
12:30-13:30 Lunch at Wolfson Court
14:00-15:00 Rafajlowicz, E (Wroclaw University of Technology)
  Optimal input signals for parameter estimation in distributed-parameter systems Sem 1
 

In the first part of the lecture we recall classical results on selecting optimal input signals for parameter estimation in systems with temporal (or spatial) dynamics only, and their generalizations to unbounded signals. As motivation for studying input signals which can influence a system both in space and in time, we provide several examples of new techniques that have emerged in high-energy lasers and in micro- and nano-technologies. We also mention the increasing role of cameras as sensors. Then, we discuss extensions of optimality conditions for input signals, trying to reveal the interplay between their spatial and temporal behavior. We concentrate on open-loop input signals for linear systems described by partial differential equations (PDEs) or their Green's functions. Finally, we sketch the following open problems: (i) simultaneous optimization of sensor positions and input signals, (ii) experiment design for estimating spatially varying coefficients of PDEs.

 
15:00-15:30 Tea and Coffee
15:30-16:15 Körkel, S (Ruprecht-Karls-Universität Heidelberg)
  Numerical methods and application strategies for optimum experimental design for nonlinear differential equation models Sem 1
 

We consider dynamic processes which are modeled by systems of nonlinear differential equations. Usually the models contain parameters whose values are unknown. To calibrate the models, the parameters have to be estimated from experimental data. Due to the uncertainty of the data, the resulting parameter estimate is random. Its uncertainty can be described by confidence regions and the corresponding variance-covariance matrix. The statistical significance of the parameter estimation can be maximized by minimizing design criteria defined on the variance-covariance matrix with respect to controls describing the layout and processing of experiments, subject to constraints on experimental costs and operability. The resulting optimum experimental design problems are constrained non-standard optimal control problems whose objective depends implicitly on the derivatives of the model states with respect to the parameters. For a numerical solution we have developed methods based on the direct approach of optimal control, on quasi-Newton methods for nonlinear optimization, and on the efficient integration and differentiation of differential equations. To make experimental design usable for practical problems, we have developed strategies including robustification, multiple experiment formulations, a sequential strategy and an on-line approach. Application examples show that optimally designed experiments yield information about processes much more reliably, much faster, and at significantly lower cost than trial-and-error or black-box approaches. We have implemented our methods in the software package VPLAN, which is applied to practical problems from partners in fields such as chemistry, chemical engineering, systems biology, epidemiology and robotics. In this talk we formulate experimental design problems, present numerical methods for their solution, discuss application strategies and give application examples from practice.
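For concreteness, the standard design criteria defined on the variance-covariance matrix can be sketched as follows (toy sensitivity values, not output from VPLAN):

```python
import numpy as np

# Variance-covariance matrix of a least-squares parameter estimate:
# C = sigma2 * (J^T J)^{-1}, where J holds the sensitivities dy/dp of the
# model outputs at the measurement points (illustrative values)
sigma2 = 0.01
J = np.array([[1.0, 0.2],
              [0.8, 0.5],
              [0.3, 0.9]])
C = sigma2 * np.linalg.inv(J.T @ J)

criteria = {
    "A": np.trace(C),                # average variance of the parameters
    "D": np.linalg.det(C),           # volume of the confidence ellipsoid
    "E": np.linalg.eigvalsh(C)[-1],  # worst-case (largest) variance
}
print(criteria)
```

In the optimal control formulation of the talk, the experimental controls enter through `J`, so minimising any of these criteria over the controls shapes the whole confidence region.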

 
16:15-17:00 Biedermann, SGM (University of Southampton)
  Optimal design for inverse problems Sem 1
 

In many real life applications, it is impossible to observe the feature of interest directly. For example, non-invasive medical imaging techniques rely on indirect observations to reconstruct an image of the patient's internal organs. We investigate optimal designs for such inverse problems. We use the optimal designs as benchmarks to investigate the efficiency of designs commonly used in applications. Several examples are discussed for illustration. Our designs provide guidelines to scientists regarding the experimental conditions at which the indirect observations should be taken in order to obtain an accurate estimate for the object of interest.

 
17:00-17:30 Bejan, A (University of Cambridge)
  Bayesian experimental design for percolation and other random graph models Sem 1
 

The problem of the optimal arrangement of the nodes of a random graph will be discussed in this talk. The nodes of the graphs under study are fixed, but their edges are random and established according to a so-called edge-probability function. This function may depend on the weights attributed to pairs of graph nodes (or the distances between them) and on a statistical parameter. It is the purpose of experimentation to make inference on the statistical parameter and, thus, to learn as much as possible about it. We also distinguish between two different experimentation scenarios: progressive and instructive designs. We adopt a utility-based Bayesian framework to tackle this problem. We prove that infinitely growing or diminishing node configurations asymptotically represent the worst node arrangements. We also obtain the exact solution to the optimal design problem for proximity (geometric) graphs and numerical solutions for graphs with threshold edge-probability functions. We use simulation-based optimisation methods, mainly Monte Carlo and Markov chain Monte Carlo, in order to obtain solutions in the general case. We study the optimal design problem for inference based on partial observations of random graphs by employing a data augmentation technique. In particular, we consider inference and optimal design problems for finite open clusters from bond percolation on the integer lattices and derive a range of both numerical and analytical results for these graphs. (Our motivation here is that open clusters in bond percolation may be seen as final outbreaks of an SIR epidemic with constant infectious times.) We introduce inner-outer design plots by considering a bounded region of the lattice and deleting some of the lattice nodes within this region, and show that the 'mostly populated' designs are not necessarily optimal in the case of incomplete observations under both progressive and instructive design scenarios. Some of the obtained results may generalise to other lattices.
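To fix ideas, a graph with a distance-based edge-probability function can be sampled as below (the node positions, the parameter θ, and the exponential form are my illustrative assumptions, not the talk's models):

```python
import numpy as np

rng = np.random.default_rng(3)

# Random graph with fixed nodes and a distance-based edge-probability
# function: p(edge i-j) = exp(-d_ij / theta)   (illustrative)
theta = 0.5
pos = rng.uniform(0, 1, size=(10, 2))                # fixed node arrangement
d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
p = np.exp(-d / theta)

upper = np.triu(rng.uniform(size=d.shape) < p, k=1)  # sample upper triangle
adj = upper | upper.T                                # symmetric adjacency
print(adj.sum() // 2)                                # number of edges
```

In the design problem, `pos` is the experimental control: different node arrangements make the observed edge pattern more or less informative about θ.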

 
19:30-22:00 Conference Dinner at Emmanuel College
Thursday 21 July
09:00-09:45 Ucinski, D (University of Zielona Góra)
  Sensor network scheduling for identification of spatially distributed processes Sem 1
 

Since for distributed parameter systems it is impossible to observe their states over the entire spatial domain, the question arises of where to locate discrete sensors so as to estimate the unknown system parameters as accurately as possible. Both researchers and practitioners do not doubt that making use of sensors placed in an 'intelligent' manner may lead to dramatic gains in the achievable accuracy of the parameter estimates, so efficient sensor location strategies are highly desirable. In turn, the complexity of the sensor location problem implies that there are very few sensor placement methods which are readily applicable to practical situations, and what is more, they are not well known among researchers. The aim of the talk is to give an account of both classical and recent original work on optimal sensor placement strategies for parameter identification in dynamic distributed systems modelled by partial differential equations. The reported work constitutes an attempt to meet the needs created by practical applications, especially regarding environmental processes, through the development of new techniques and algorithms, or by adapting methods which have been successful in the akin fields of optimal control and optimum experimental design. In the planning phase, real-valued functions of the Fisher information matrix of the parameters are primarily employed as the performance indices to be minimized with respect to the positions of pointwise sensors. Extensive numerical results are included to show the efficiency of the proposed algorithms. A couple of case studies regarding the design of air quality monitoring networks and network design for groundwater pollution problems serve as illustrations of the strength of the proposed approach in practical problems.
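A common computational shortcut for such problems is greedy D-optimal selection over a finite grid of candidate sensor locations. The sketch below uses random toy sensitivities in place of a PDE solution, so it illustrates the flavour of FIM-based placement rather than the talk's actual algorithms:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy sensitivities dy/dtheta (3 parameters) at 50 candidate sensor
# locations; in practice these come from solving the PDE model
S = rng.normal(size=(50, 3))

chosen = []
M = 1e-6 * np.eye(3)                 # regularised information matrix
for _ in range(5):                   # place 5 sensors greedily
    gains = np.full(len(S), -np.inf)
    for i, s in enumerate(S):
        if i not in chosen:
            # D-optimal gain of adding sensor i: det of the updated FIM
            gains[i] = np.linalg.det(M + np.outer(s, s))
    k = int(np.argmax(gains))
    chosen.append(k)
    M = M + np.outer(S[k], S[k])
print(chosen)
```

Greedy selection is suboptimal in general, but it scales to the large candidate grids that arise when discretising a spatial domain.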

 
09:45-10:30 Patan, M (University of Zielona Góra)
  Resource-limited mobile sensor routing for parameter estimation of distributed systems Sem 1
 

The problem of determining optimal observation strategies for the identification of unknown parameters in distributed-parameter systems is discussed. In particular, we consider a setting where the measurement process is performed by collecting spatial data from mobile nodes with sensing capacity, forming an organized network. The framework is based on the use of a criterion defined on the Fisher information matrix associated with the estimated parameters as a measure of the information content in the measurements. The motivation stems from engineering practice, where the clusterization of measurements at some spatial positions at a given time moment often leads to a decrease in the robustness of the observational system to model misspecification. Furthermore, some technical limitations are imposed on the sensor paths in order to avoid collisions, satisfy energy constraints and/or provide a proper deployment of mobile sensor nodes. The approach is to convert the problem to a canonical optimal control one in which the control forces of the sensors may be optimized. Then, through an adaptation of some pairwise communication algorithms, a numerical scheme is developed which decomposes the resulting problem and distributes the computational burden between network nodes. Numerical solutions are then obtained using widespread powerful numerical packages which handle the various constraints imposed on the node motions. As a result, an adaptive scheme is outlined to determine guidance policies for network nodes in a decentralized fashion.

 
10:30-11:00 Carraro, T (Ruprecht-Karls-Universität Heidelberg)
  From parametric optimization to optimal experimental design: A new perspective in the context of partial differential equations Sem 1
 

We propose a new perspective on the optimal experimental design (OED) problem, several theoretical and computational aspects of which have been studied previously. The formal setting of parametric optimization leads to the definition of a generalized framework from which the OED problem can be derived. Although this approach does not have a direct impact on the computational aspects, it links the OED problem to a wider field of theoretical results, ranging from optimal control problems to the stability of optimization problems. Following this approach, we derive the OED problem in the context of partial differential equations (PDE) and present a primal-dual active set strategy to solve the constrained OED problem. Numerical examples are presented.

 
11:00-11:30 Tea and Coffee
11:30-12:30 Gibson, GJ (Heriot-Watt University)
  Bayesian experimental design for stochastic dynamical models Sem 1
 

Advances in Bayesian computational methods have meant that it is now possible to fit a broad range of stochastic, non-linear dynamical models (including spatio-temporal formulations) within a rigorous statistical framework. In epidemiology these methods have proved particularly valuable for producing insights into the transmission dynamics of historical epidemics and for assessing potential control strategies. On the other hand, less attention has been paid to the question of how future data should be collected most efficiently for the purpose of analysis with these models. This talk will describe how the Bayesian approach to experimental design can be applied with standard epidemic models in order to identify the most efficient manner of collecting data to provide information on key rate parameters. Central to the approach is the representation of the design as a 'parameter' in an extended parameter space, with the optimal design appearing as the marginal mode for an appropriately specified joint distribution. We will also describe how approximations, derived using moment-closure techniques, can be applied in order to make tractable the computation of likelihood functions which, given the partial nature of the data, would be prohibitively complex using methods such as data augmentation. The talk will illustrate the ideas in the context of designing microcosm experiments to study the spread of fungal pathogens in agricultural crops, where the design problem relates to the particular choice of sampling times used. We will examine the use of utility functions based entirely on information measures that quantify the difference between prior and posterior parameter distributions, and also discuss how economic factors can be incorporated in the construction of utilities for this class of problems.
The talk will demonstrate how, if sampling times are appropriately selected, it may be possible to drastically reduce the amount of sampling required compared with designs currently used, without compromising the information gained on key parameters. Some challenges and opportunities for future research on design with stochastic epidemic models will also be discussed.
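The information-based utility described above can be sketched with a toy Monte Carlo calculation: for a candidate sampling time, simulate data from the prior predictive, compute the posterior on a parameter grid, and average the prior-to-posterior Kullback-Leibler gain. The infection model p(t; β) = 1 − exp(−βt), the cohort size, and the grid are all invented for illustration and are not the models used in the talk.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)
n = 20                                      # cohort size (illustrative)
betas = np.linspace(0.01, 2.0, 200)         # grid over the rate parameter
prior = np.ones_like(betas) / betas.size    # flat prior on the grid

def infect_prob(t, beta):
    # Probability an individual is infected by time t under rate beta.
    return 1.0 - np.exp(-beta * t)

def expected_gain(t, n_sim=300):
    """Monte Carlo estimate of the expected prior-to-posterior KL gain
    when sampling the cohort once at time t."""
    gains = []
    for _ in range(n_sim):
        beta = rng.choice(betas, p=prior)            # draw parameter from prior
        k = rng.binomial(n, infect_prob(t, beta))    # simulate observed count
        # Binomial likelihood over the grid, then normalise to the posterior.
        lik = np.array([comb(n, int(k)) * infect_prob(t, b) ** k
                        * (1 - infect_prob(t, b)) ** (n - k) for b in betas])
        post = lik * prior
        post /= post.sum()
        kl = np.sum(np.where(post > 0, post * np.log(post / prior), 0.0))
        gains.append(kl)
    return float(np.mean(gains))
```

Comparing `expected_gain` across a set of candidate sampling times then gives a crude design ranking; a very early observation time, at which almost no infections have occurred, yields little expected information.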

 
12:30-13:30 Lunch at Wolfson Court
14:00-15:00 Müller, WG (Johannes Kepler Universität)
  Spatial design criteria and space-filling properties Sem 1
 

Several papers have recently strengthened the bridge connecting geostatistics and spatial econometrics. In these two fields various criteria have been developed for constructing optimal spatial sampling designs. We will explore relationships between these types of criteria and allude to their space-filling (or non-space-filling) properties.
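One common way to quantify the space-filling property mentioned above is the maximin-distance criterion: a design is considered space-filling when its smallest pairwise distance is large. The greedy selection below is only an illustrative sketch on invented candidate sites, not a method from the talk.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
candidates = rng.uniform(0.0, 1.0, size=(50, 2))  # candidate monitoring sites

def maximin_score(design):
    """Smallest pairwise distance in the design; larger = more space-filling."""
    return min(np.linalg.norm(a - b) for a, b in combinations(design, 2))

def greedy_maximin(candidates, k):
    """Greedily pick k sites, each maximising its distance to those already chosen."""
    chosen = [candidates[0]]
    while len(chosen) < k:
        dists = [min(np.linalg.norm(c - s) for s in chosen) for c in candidates]
        chosen.append(candidates[int(np.argmax(dists))])
    return np.array(chosen)

design = greedy_maximin(candidates, 6)
```

Model-based spatial criteria (e.g. those built on covariance structure) can favour quite different configurations, which is exactly the tension the abstract alludes to.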

 
15:00-15:30 Tea and Coffee
15:30-16:15 Stehlík, M (Johannes Kepler Universität)
  Optimal design and properties of correlated processes with semicontinuous covariance Sem 1
 

Semicontinuous covariance functions have been used in regression and kriging by many authors. In a recent work we introduced purely topologically defined regularity conditions on covariance kernels which are still applicable for increasing and infill domain asymptotics for regression problems and kriging. These conditions are related to the semicontinuous maps of Ornstein-Uhlenbeck processes, and can therefore be of benefit for stochastic processes on more general spaces than metric ones. Moreover, the new regularity conditions relax the continuity of the covariance function by allowing a semicontinuous covariance. We discuss the applicability of the introduced topological regularity conditions for optimal design of random fields. A stochastic process with parametrized mean and covariance is observed over a compact set. The information obtained from observations is measured through an information functional (defined on the Fisher information matrix). We start with a discussion of the role of equidistant designs for the correlated process. Various aspects of their prospective optimality will be reviewed, and some issues on designing for spatial processes will also be covered. Finally we will concentrate on relaxing the continuity of the covariance. We will introduce regularity conditions for isotropic processes with semicontinuous covariance such that increasing domain asymptotics remains feasible, although more flexible behaviour may occur. In particular, the role of the nugget effect will be illustrated and a practical application of stochastic processes with semicontinuous covariance will be given.

 
16:15-16:30 Poster Storm
16:30-17:30 Poster Session
Friday 22 July
09:00-10:00 Curtis, A (University of Edinburgh)
  Advances in nonlinear geoscientific experimental and survey design Sem 1
 

Geoscience is replete with inverse problems that must be solved routinely. Many such problems, such as using satellite remote-sensing data to estimate properties of the Earth's surface, or solving geophysical imaging and monitoring problems for potentially dynamic properties of the Earth's subsurface, involve large datasets that cost millions of dollars to collect. Optimising the information content of such data is therefore crucial. While linearised experimental design methods have been deployed within the geosciences, most geophysical problems are significantly nonlinear. This renders linearised design criteria invalid, as they can significantly over- or under-estimate the information content of any dataset. Over the past few years we have therefore focussed on developing new nonlinear design methods that can be applied to practical data types and geometries for surveys of increasing size. We will summarise three advances in practical nonlinear design: one using a new design criterion applied in the data space, one using a new 'bi-focal' model space criterion, and one using a fast Monte Carlo refinement procedure that significantly speeds up nonlinear design calculations. The first two techniques are applied to the design of subsurface (micro-)seismic energy-source location problems; the third is applied to the design of industrial seismic amplitude-versus-offset data sets used to derive (an)elastic properties of subsurface geological strata. Using the first of these, we produced an industrially practical geophysical survey design using fully non-linearised methods.

 
10:00-11:00 Wilkinson, PB (British Geological Survey (BGS))
  SMART: Progress towards an optimising time-lapse geoelectrical imaging system Sem 1
 

Electrical resistivity tomography (ERT) is a widely-used geophysical technique for shallow subsurface investigations and monitoring. A range of automatic multi-electrode ERT systems, both commercial and academic, are routinely used to collect resistivity data sets that cover large survey areas at high spatial and temporal density. But despite the flexibility of these systems, the data still tend to be measured using traditional arrangements of electrodes. Recent research by several international groups has highlighted the possibility of using automatically generated survey designs which are optimised to produce the best possible tomographic image resolution given the limitations of time and practicality required to collect and process the data. Here we examine the challenges of applying automated ERT survey design to real experiments where resistivity imaging is being used to monitor subsurface processes. Using synthetic and real examples we address the problems of avoiding electrode polarisation effects, making efficient use of multiple simultaneous measurement channels, and making optimal measurements in noisy environments. These are essential steps towards implementing SMART (Sensitivity-Modulated Adaptive Resistivity Tomography), a robust self-optimising ERT monitoring system. We illustrate the planned design and operation of the SMART system using a simulated time-lapse experiment to monitor a saline tracer. The results demonstrate the improvements in image resolution that can be expected over traditional ERT monitoring.

 
11:00-11:30 Tea and Coffee
11:30-12:30 Wynn, HP (London School of Economics)
  Information-based methods in dynamic learning Sem 1
 

The history of information/entropy in learning due to Blackwell, Renyi, Lindley and others is sketched. Using results of de Groot, with new proofs, we arrive at a general class of information functions which gives "expected" learning in the Bayes sense. It is shown how this is intimately connected with the theory of majorization: learning means a more peaked distribution in a majorization sense. Counter-examples show that in some real situations it is possible to un-learn, in the sense of having a less peaked posterior than prior. This does not happen in the standard Gaussian case, but does in cases such as the Beta-mixed binomial. Applications are made to experimental design. For designs for non-linear and dynamic systems, an idea of "local learning" is defined, in which the above theory is applied locally. Connections with ideas of "active learning" in the machine learning area are also explored.
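The un-learning phenomenon in the Beta-binomial case can be checked numerically: below, variance is used as a crude proxy for peakedness (the talk's notion is majorization, which is finer), and the particular prior Beta(1, 19) is chosen for illustration, not taken from the talk. A single observed success makes the posterior *more* spread out than the prior.

```python
def beta_var(a, b):
    """Variance of a Beta(a, b) distribution: ab / ((a+b)^2 (a+b+1))."""
    return a * b / ((a + b) ** 2 * (a + b + 1))

# Prior sharply peaked near zero (illustrative choice): Beta(1, 19).
a0, b0 = 1.0, 19.0
# Observe one success in a single Bernoulli trial -> posterior Beta(2, 19).
a1, b1 = a0 + 1, b0

prior_var = beta_var(a0, b0)   # 19/8400  ~ 0.00226
post_var = beta_var(a1, b1)    # 38/9702  ~ 0.00392
# post_var > prior_var: the posterior is less peaked than the prior,
# i.e. "un-learning" in the variance sense.
```

In the conjugate Gaussian case, by contrast, the posterior variance never exceeds the prior variance, matching the dichotomy described in the abstract.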

 
12:30-13:30 Lunch at Wolfson Court
