Scientific Advisory Committee: Anthony Atkinson (LSE), Kathryn Chaloner (Iowa), Steve Gilmour (Queen Mary), Sue Lewis (Southampton), David Steinberg (Tel Aviv) and Henry Wynn (LSE).
Design of Experiments is a fundamental part of the knowledge discovery process in science and engineering, and has impact in a wide variety of fields. Long-standing principles of experimentation, such as randomisation, replication and blocking have become standards of best practice in many areas. In return, stimulus for novel research in the Design of Experiments comes from advances in technology and techniques in application fields important to science and society. These advances often result in complex experiments, for example, with large numbers of factors, many levels of randomisation, or with the aim of estimating or discriminating between nonlinear models. When these experiments cannot be designed using established methods, they motivate methodological and theoretical advances in the design of experiments. Many recent advances in the subject have come from just this synergy between application and methodology.
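The principles of randomisation, replication and blocking mentioned above can be illustrated concretely. The following is a minimal sketch (the function name and tuple layout are illustrative, not from any particular package) of a randomised complete block design, in which every treatment appears once in each block in an independently randomised order:

```python
import random

def randomised_blocks(treatments, n_blocks, seed=0):
    """Randomised complete block design: every treatment appears exactly
    once per block, in an independent random order within each block.
    Returns a list of (block, unit, treatment) tuples."""
    rng = random.Random(seed)
    plan = []
    for block in range(1, n_blocks + 1):
        order = list(treatments)
        rng.shuffle(order)  # randomisation within the block
        plan.extend((block, unit + 1, t) for unit, t in enumerate(order))
    return plan
```

Blocking removes variation between blocks from treatment comparisons, while the within-block randomisation protects against unmodelled systematic effects.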
DEMA2008 will bring together researchers and practitioners for the interchange of new ideas on the design and analysis of experiments. The workshop will emphasise both methodology and application areas and should be of interest to scientists and engineers who use experiments, as well as to statisticians.
This workshop will be held in the final week of a month-long research programme at the Newton Institute and will provide a forum for the sharing of new ideas and advances from this programme with a wider audience. The workshop will have themes which reflect those of the research programme; in particular, new methodology for the design of genetics and proteomics studies, computer experiments and clinical trials, and sessions on more general design topics. A theme day on each of these three applied topics will be organised and participation from scientists in each field is strongly encouraged.
Experiments in Genomics and Proteomics (12 August 2008)
The cells in living organisms accomplish their functions by producing specific proteins. To identify the genes regulating this process, high-throughput technologies have been developed over the past decade. In particular, microarray experiments for simultaneously studying the differential expression of thousands of genes are now in routine use. Similarly, proteomics experiments analyse profiles from mass spectrometry to understand how genes are translated into proteins. In parallel with the advent of these new technologies, the question of design has attracted considerable attention. The corresponding design problems are varied and often require the development of new theory, or the interpretation of existing results in an entirely new way.
Clinical Trials (14 August 2008)
Clinical trials are experiments on animals and on human volunteers and patients that explore a proposed treatment for a disease, with the aim of obtaining a licence for the commercial use of the treatment on non-experimental patients.
The experimental programme is often described as having several phases. Assessment of the treatment starts with healthy individuals, to check the absence of adverse reactions. Experiments are then designed to determine the dosage, sometimes constrained by a need to avoid toxicity. The large Phase III trials are used to determine the advantages, if any, of the new treatment, often by comparison with existing treatments. Randomisation and balance over covariates are especially important in this phase.
Regulatory authorities, such as the US FDA, are rightly concerned that designs have specified properties and are analysed in a pre-specified way. Sample size and power are two important such characteristics.
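The link between sample size and power can be made explicit with the usual two-sample normal approximation. The sketch below is a back-of-envelope calculation only (the function name and default values are illustrative), giving the number of patients per arm needed to detect a mean difference between two treatments:

```python
import math
from statistics import NormalDist

def patients_per_arm(delta, sigma, alpha=0.05, power=0.8):
    """Patients per arm needed to detect a mean difference delta between
    two treatments with common standard deviation sigma, using the
    standard two-sided normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_beta = NormalDist().inv_cdf(power)           # required power
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)
```

For example, to detect a standardised effect of half a standard deviation at the conventional 5% level with 80% power, `patients_per_arm(0.5, 1.0)` gives 63 patients per arm.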
However, too rigid an analytical straitjacket can conflict with the efficiency of statistical procedures if unexpected evidence becomes available during a trial. Protocols for interim analyses, or for changes in emphasis, have to be tightly specified so that biases are avoided and test size and power are maintained.
Particularly with human patients there is also the ethical desire to treat as many patients as possible with the better treatments. This leads to group sequential and response adaptive designs where the allocations change as responses on earlier patients become available. Controlling the properties of such trials is a complex task.
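One classical response-adaptive rule of the kind described here is the randomised play-the-winner urn, in which a success on a treatment adds a ball of that treatment's colour to the urn and a failure adds a ball of the other colour, so allocation drifts towards the better-performing arm. A simulation sketch (function name and parameters are illustrative):

```python
import random

def play_the_winner(p_a, p_b, n_patients, seed=0):
    """Simulate a randomised play-the-winner urn for two treatments A and B
    with success probabilities p_a and p_b. A success adds a ball of the
    same colour to the urn; a failure adds a ball of the other colour.
    Returns the number of patients allocated to each arm."""
    rng = random.Random(seed)
    urn = ["A", "B"]  # start with one ball per treatment
    allocations = {"A": 0, "B": 0}
    for _ in range(n_patients):
        arm = rng.choice(urn)  # draw a ball at random
        allocations[arm] += 1
        success = rng.random() < (p_a if arm == "A" else p_b)
        if success:
            urn.append(arm)
        else:
            urn.append("B" if arm == "A" else "A")
    return allocations
```

With a strongly superior treatment A, most patients end up allocated to A, illustrating the ethical appeal of such rules; controlling the size and power of the resulting trial is, as noted above, considerably harder.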
Computer Experiments (15 August 2008)
The term computer experiments refers to the design and analysis of experiments on complex computer codes. An increasing number of systems are modelled via deterministic computer models, particularly where physical experiments are infeasible (too expensive, too dangerous or simply impossible). Examples include engineering, environmental and climate models, geophysical models, chemical dynamics and disease models. The computer models are usually based on complex mathematics, for example nonlinear partial differential equations or finite element models, with many different inputs; the output is then essentially that of a numerical solver, for example one based on the finite element method. To explore and understand the models, experiments are performed in which the design consists of combinations of values of the input variables. As the response is deterministic, many of the standard principles of design (such as randomisation and replication) no longer apply. A new discipline of the statistical design and analysis of these experiments has developed over the last 20 years.
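A widely used class of designs for such deterministic codes is the space-filling design, of which the Latin hypercube is the best-known example: each input's range is cut into as many equal strata as there are runs, and each stratum is sampled exactly once. A minimal sketch on the unit cube (the function name is illustrative; production work would typically use a library implementation with optimised space-filling properties):

```python
import random

def latin_hypercube(n_runs, n_inputs, seed=0):
    """Generate n_runs design points in [0,1]^n_inputs. For each input,
    the interval [0,1] is cut into n_runs equal strata and each stratum
    is sampled exactly once, in a random order."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_inputs):
        perm = list(range(n_runs))
        rng.shuffle(perm)  # random assignment of strata to runs
        # one uniform draw inside each stratum
        columns.append([(cell + rng.random()) / n_runs for cell in perm])
    return [tuple(col[i] for col in columns) for i in range(n_runs)]
```

Each one-dimensional projection of the design covers the input range evenly, which is valuable when only a few of the many inputs turn out to matter.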
Fisher Memorial Trust