Timetable (DAEW07)

Design and Analysis of Experiments in Healthcare

Monday 6th July 2015 to Friday 10th July 2015

Monday 6th July 2015
09:00 to 09:45 Registration
09:45 to 10:00 Welcome by John Toland (INI Director) and Rosemary Bailey (St Andrews) INI 1
10:00 to 10:45 Designing an adaptive trial with treatment selection and a survival endpoint
We consider a clinical trial in which two versions of a new treatment are compared against control with the primary endpoint of overall survival. At an interim analysis, mid-way through the trial, one of the two treatments is selected, based on the short term response of progression-free survival. For such an adaptive design the familywise type I error rate can be protected by use of a closed testing procedure to deal with the two null hypotheses and combination tests to combine data from before and after the interim analysis. However, with the primary endpoint of overall survival, there is still a danger of inflating the type I error rate: we present a way of applying the combination test that solves this problem simply and effectively. With the methodology in place, we then assess the potential benefits of treatment selection in this adaptive trial design.
INI 1
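The combination-test-with-closed-testing machinery described in the abstract above is standard in adaptive designs with treatment selection. The sketch below shows the usual inverse normal combination of stage-wise p-values and the closed test for the selected treatment; it is a minimal illustration only, assuming pre-specified equal weights and a one-sided 2.5% level, and it does not reproduce the talk's specific fix for the overall-survival type I error issue. Function names and arguments are illustrative.

```python
import numpy as np
from scipy.stats import norm

def inv_normal_combine(p_stage1, p_stage2, w1=np.sqrt(0.5), w2=np.sqrt(0.5)):
    """Inverse normal combination of stage-wise one-sided p-values;
    the pre-specified weights must satisfy w1**2 + w2**2 == 1."""
    z = w1 * norm.ppf(1 - p_stage1) + w2 * norm.ppf(1 - p_stage2)
    return 1 - norm.cdf(z)

def reject_selected_treatment(p_sel_stage1, p_intersection_stage1,
                              p_sel_stage2, alpha=0.025):
    """Closed test for the treatment selected at the interim: its null
    hypothesis is rejected only if both the intersection hypothesis and
    the elementary hypothesis are rejected by the combination test.
    After the interim only the selected arm continues, so the same
    stage-2 p-value enters both combinations."""
    reject_intersection = inv_normal_combine(p_intersection_stage1, p_sel_stage2) <= alpha
    reject_elementary = inv_normal_combine(p_sel_stage1, p_sel_stage2) <= alpha
    return reject_intersection and reject_elementary
```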
10:45 to 11:15 Morning Coffee
11:15 to 12:00 Adaptive dose-finding with power control
A main objective in dose-finding trials, besides the characterisation of the dose-response relationship, is often to prove that the drug has an effect. The sample size calculation prior to the trial aims to control the power for this proof of effect. In most cases, however, there is great uncertainty during planning concerning the anticipated effect of the drug. Sample size re-estimation based on an unblinded interim effect estimate has been used in this situation. In practice, this re-estimation can have drawbacks as the sample size becomes variable, which makes planning and funding complicated. Furthermore, it introduces the risk that people start to speculate about the effect once the re-estimated sample size is communicated. In this talk, we will investigate a design which avoids this problem but controls the power for the proof of effect in an adaptive way. We discuss methods for proper statistical inference for the described design.
INI 1
12:00 to 12:30 Start-up designs for response-adaptive randomization procedures
Response-adaptive randomization procedures are appropriate for clinical trials in which two or more treatments are to be compared, patients arrive sequentially and the response of each patient is recorded before the next patient arrives. For those procedures which involve sequential estimation of model parameters, start-up designs are commonly used in order to provide initial estimates of the parameters. In this talk a suite of such start-up designs for two treatments and binary patient responses is considered and compared in terms of the number of patients required in order to give meaningful parameter estimates, the number of patients allocated to the better treatment and the bias in the parameter estimates. It is shown that permuted block designs with blocks of size 4 are to be preferred over a wide range of parameter values. Results from a simulation study involving complete trials will also be reported.
INI 1
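As a small illustration of the start-up designs discussed above, the following sketch generates a permuted-block start-up allocation with blocks of size 4, the variant the abstract reports as preferable. It is a generic sketch, not the authors' code; the function name and arguments are hypothetical.

```python
import random

def permuted_block_startup(n_patients, block_size=4, treatments=("A", "B"), seed=None):
    """Start-up allocation with permuted blocks: each block contains an
    equal number of allocations to each treatment in random order, so a
    block size of 4 gives 2 + 2 within every block."""
    rng = random.Random(seed)
    per_arm = block_size // len(treatments)
    sequence = []
    while len(sequence) < n_patients:
        block = list(treatments) * per_arm
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_patients]

# Example: allocation list for an 8-patient start-up phase.
print(permuted_block_startup(8, seed=1))
```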
12:30 to 13:30 Lunch at Wolfson Court
13:30 to 14:15 An adaptive optimal design with a small fixed stage-one sample size
A large number of experiments in clinical trials, biology, biochemistry, etc. are, out of necessity, conducted in two stages. A first-stage experiment (a pilot study) is often used to gain information about feasibility of the experiment or to provide preliminary data for grant applications. We study the theoretical statistical implications of using a small sample of data (1) to design the second stage experiment and (2) in combination with the second-stage data for data analysis. To illuminate the issues, we consider an experiment under a non-linear regression model with normal errors. We show how the dependency between data in the different stages affects the distribution of parameter estimates when the first-stage sample size is fixed and finite; letting the second stage sample size go to infinity, maximum likelihood estimates are found to have a mixed normal distribution.
INI 1
14:15 to 15:00 Some critical issues in adaptive experimental designs for treatment comparisons
The study of adaptive designs has received impetus over the past 20 years, mainly because of its potential in applications to clinical trials, and a large number of these designs are now available in the statistical and biomedical literature, ranging from simple ones like Efron’s Biased Coin Design and Zelen’s Play-the-Winner to more sophisticated ones: D_A-optimum Biased Coin designs, Doubly-Adaptive designs, ERADE, Randomly Reinforced Urns, Reinforced Doubly-Adaptive Biased Coins, etc., not forgetting covariates (Covariate-Adjusted Response-Adaptive, Covariate-Adaptive Biased Coin, Covariate-adjusted Doubly-adaptive Biased Coin designs, etc.).

A complicating factor is that nowadays adaptive experiments are in general multipurpose: they try to simultaneously achieve inferential efficiency, bias avoidance, and utilitarian or ethical gains. Another is the very nature of an adaptive design, which is a random experiment that may or may not converge to a fixed treatment allocation.

This talk does not intend to be a survey of the existing literature, rather it is an effort to highlight potentially critical points: inference after an adaptive experiment, combined versus constrained optimization, speed of convergence to a desired target, possible misuse of simulations. Each of these points will be discussed with reference to one or more specific adaptive designs.

INI 1
15:00 to 15:30 Afternoon Tea
15:30 to 16:15 V Dragalin (Johnson & Johnson PRD)
Adaptive population enrichment designs
There is a growing interest among regulators and sponsors in using precision medicine approaches that allow targeted patients to receive maximum benefit from the correct dose of a specific drug. Population enrichment designs offer a specific adaptive trial methodology to study the effect of experimental treatments in various sub-populations of patients under investigation. Instead of limiting enrolment to the enriched population, these designs enable the data-driven selection of one or more pre-specified subpopulations at an interim analysis and the confirmatory proof of efficacy in the selected subset at the end of the trial. In this presentation, the general methodology and design issues arising when planning such a trial will be described and illustrated using two case studies.
INI 1
16:15 to 17:00 A population-finding design with non-parametric Bayesian response model
Targeted therapies based on genomic aberration analysis of the tumor have become a mainstream direction in cancer prognosis and treatment. Studies that match patients to targeted therapies for their particular genomic aberrations, across different cancer types, are known as basket trials. For such trials it is important to find and identify the subgroup of patients who can most benefit from an aberration-specific targeted therapy, possibly across multiple cancer types.

We propose an adaptive Bayesian clinical trial design for such subgroup identification and adaptive patient allocation. We start with a decision-theoretic approach, then construct a utility function and a flexible non-parametric Bayesian response model. The main features of the proposed design and population-finding methods are that we allow for variable sets of covariates to be recorded for different patients and, at least in principle, high-order interactions of covariates. The separation of the decision problem and the probability model allows for the use of highly flexible response models. Another important feature is the adaptive allocation of patients to an optimal treatment arm based on posterior predictive probabilities. The proposed approach is demonstrated via extensive simulation studies.

INI 1
17:00 to 18:00 Welcome Wine Reception
Tuesday 7th July 2015
09:00 to 17:30 Population Optimum Design of Experiments
09:30 to 10:00 Registration
10:00 to 10:05 Welcome
10:05 to 10:30 10 years of progress in population design methodology and applications INI 1
10:30 to 11:00 S Leonov & T Mielke (ICON Clinical Research)
Optimal design and parameter estimation for population PK/PD models
In this presentation we discuss methods of model-based optimal experimental design that are used in population pharmacokinetic/pharmacodynamic studies and focus on links between various parameter estimation methods for nonlinear mixed effects models and various options for approximation of the information matrix in optimal design algorithms.
INI 1
11:00 to 11:30 Morning Coffee
11:30 to 12:00 Evaluation of the Fisher information matrix in nonlinear mixed effects models using Monte Carlo Markov Chains
For the analysis of longitudinal data, and especially in the field of pharmacometrics, nonlinear mixed effect models (NLMEM) are used to estimate population parameters and the interindividual variability. To design these studies, optimal design based on the expected Fisher information matrix (FIM) can be used instead of performing time-consuming clinical trial simulations. Until recently, the FIM in NLMEM was mostly evaluated with first-order linearization (FO). We propose an approach to evaluate the exact FIM using Monte Carlo (MC) approximation and Markov chain Monte Carlo (MCMC). Our approach is applicable to continuous as well as discrete data and was implemented in R using the probabilistic programming language Stan. This language makes it possible to draw MCMC samples efficiently and to calculate the partial derivatives of the conditional log-likelihood directly from the model. The method requires several minutes for a FIM evaluation but yields an asymptotically exact FIM. Furthermore, computation time remains similar even for complex models with many parameters. We compare our approach to clinical trial simulations for various continuous and discrete examples.
INI 1
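The core idea described above, evaluating the expected FIM as the Monte Carlo average of outer products of the score, can be sketched generically as follows. This is not the authors' R/Stan implementation: here the model simulator and the score of the marginal log-likelihood are left as user-supplied callables (in the talk's approach the score is itself obtained from MCMC samples of the random effects drawn with Stan). All names are illustrative.

```python
import numpy as np

def mc_fisher_information(simulate_data, marginal_score, theta, n_mc=1000, seed=0):
    """Monte Carlo approximation of the expected Fisher information
    E[s(theta; y) s(theta; y)^T], where s is the score of the marginal
    log-likelihood. `simulate_data(theta, rng)` draws one dataset from
    the design/model under evaluation and `marginal_score(theta, y)`
    returns the score vector for that dataset."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta, dtype=float)
    fim = np.zeros((theta.size, theta.size))
    for _ in range(n_mc):
        y = simulate_data(theta, rng)
        s = np.asarray(marginal_score(theta, y), dtype=float)
        fim += np.outer(s, s)
    return fim / n_mc
```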
12:00 to 12:30 Computation of the Fisher information matrix for discrete nonlinear mixed effects models
Despite an increasing use of optimal design methodology for non-linear mixed effect models (NLMEMs) during the clinical drug development process (Mentré et al., 2013), examples involving discrete data NLMEMs remain scarce (Ernest et al., 2014). One reason is the limitations of existing approaches to calculate the Fisher information matrix (FIM), which are either model dependent and based on linearization (Ogungbenro and Aarons, 2011) or computationally very expensive (Nyberg et al., 2009). The main computational challenges in the computation of the FIM for discrete NLMEMs revolve around the calculation of two integrals: first, the integral required to calculate the expectation over the data, and second, the integral of the likelihood over the distribution of the random effects. In this presentation Monte-Carlo (MC), Latin-Hypercube (LH) and Quasi-Random (QR) sampling for the calculation of the first, as well as adaptive Gaussian quadrature (AGQ) and QR sampling for the calculation of the second integral, are proposed. The resulting methods are compared for a number of discrete data models and evaluated in the context of model-based adaptive optimal design.
INI 1
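The second integral mentioned above, the likelihood integrated over the random-effects distribution, is often handled by Gauss-Hermite quadrature. The sketch below shows the plain (non-adaptive) version for a single scalar random effect; the adaptive variant discussed in the talk additionally recentres and rescales the nodes at the conditional mode. Names and arguments are illustrative.

```python
import numpy as np

def marginal_likelihood_gh(cond_lik, omega, n_nodes=21):
    """Approximate the marginal likelihood of one subject,
    the integral of cond_lik(b) * N(b; 0, omega^2) over b, using
    (non-adaptive) Gauss-Hermite quadrature for a scalar random effect b.
    `cond_lik(b)` is the conditional likelihood of the subject's data."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    b_values = np.sqrt(2.0) * omega * nodes          # change of variables
    values = np.array([cond_lik(b) for b in b_values])
    return float(np.sum(weights * values) / np.sqrt(np.pi))
```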
12:30 to 14:00 Sandwich lunch at INI
14:00 to 14:30 Designs for generalized linear models with random block effects
For an experiment measuring independent discrete responses, a generalized linear model, such as the logistic or log-linear, is typically used to analyse the data. In blocked experiments, where observations from the same block are potentially correlated, it may be appropriate to include random effects in the predictor, thus producing a generalized linear mixed model. Selecting optimal designs for such models is complicated by the fact that the Fisher information matrix, on which most optimality criteria are based, is computationally expensive to evaluate. In addition, the dependence of the information matrix on the unknown values of the parameters must be overcome by, for example, use of a pseudo-Bayesian approach. For the first time, we evaluate the efficiency, for estimating conditional models, of optimal designs from closed-form approximations to the information matrix, derived from marginal quasilikelihood and generalized estimating equations. It is found that, for binary-response models, naive application of these approximations may result in inefficient designs. However, a simple correction for the marginal attenuation of parameters yields much improved designs when the intra-block dependence is moderate. For stronger intra-block dependence, our adjusted marginal modelling approximations are sometimes less effective. Here, more efficient designs may be obtained from a novel asymptotic approximation. The use of approximations from this suite reduces the computational burden of design search substantially, enabling straightforward selection of multi-factor designs.
INI 1
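As a reminder of the pseudo-Bayesian handling of parameter dependence mentioned in the abstract above, the sketch below evaluates a pseudo-Bayesian D-criterion for an ordinary logistic model by averaging the log-determinant of the information over prior draws of the parameters. It deliberately ignores the random block effects and the marginal approximations that are the paper's actual subject; function names are illustrative.

```python
import numpy as np

def logistic_information(X, beta):
    """Fisher information of an ordinary logistic regression for the
    design matrix X evaluated at parameter vector beta."""
    eta = X @ beta
    w = np.exp(eta) / (1.0 + np.exp(eta)) ** 2       # Bernoulli variance of the mean
    return X.T @ (w[:, None] * X)

def pseudo_bayesian_d_criterion(X, beta_draws):
    """Average log-determinant of the information over prior draws of
    beta -- a common way to deal with the parameter dependence of the
    information matrix in nonlinear and generalized linear models."""
    values = [np.linalg.slogdet(logistic_information(X, b))[1] for b in beta_draws]
    return float(np.mean(values))
```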
14:30 to 15:00 Design and Analysis of In-Vitro Pharmacokinetic Experiments
In many pharmacokinetic experiments, the main goal is to identify enzymes that are related to the metabolic process of a substrate of interest. Since most of the enzymes that are involved in drug metabolism are located in the liver, human liver microsomes (HLMs) are used in these in vitro experiments. Experimental runs are conducted for each HLM at different levels of substrate concentration and the response, the initial rate of reaction, is measured from each experimental run. The relationship between such a response and the substrate concentration is usually nonlinear and so it is assessed from the size of the nonlinear regression parameters. However, the use of different HLMs requires additional random effects and there might also be covariate information on these HLMs. A further complication is uncertainty about the error structure of the resulting nonlinear mixed model. Methods for obtaining optimal designs for such models will be described. The resulting designs will be compared with the larger designs used in current practice. It will be shown that considerable savings in experimental time and effort are possible. Practical issues around the design and analysis will be discussed, along with suggestions of how the methods are best implemented.
INI 1
15:00 to 15:30 Incorporating pharmacokinetic information in phase I studies in small populations
Objectives: To review and extend existing methods which take into account PK measurements in sequential adaptive designs for early dose-finding studies in small populations, and to evaluate the impact of PK measurements on the selection of the maximum tolerated dose (MTD). Methods: This work is set in the context of phase I dose-finding studies in oncology, where the objective is to determine the MTD while limiting the number of patients exposed to high toxicity. We assume toxicity to be related to a PK measure of exposure, and consider 6 possible dose levels. Three Bayesian phase I methods from the literature were modified and compared to the standard continual reassessment method (CRM) through simulations. In these methods the PK measurement, more precisely the AUC, is included as a covariate in a link function for the probability of toxicity (Piantadosi and Liu (1996), Whitehead et al. (2007)) and/or as the dependent variable in a linear regression versus dose (Patterson et al. (1999), Whitehead et al. (2007)). We simulated trials based on a model for the TGF-β inhibitor LY2157299 in patients with glioma (Gueorguieva et al., 2014). The PK model was reduced to a one-compartment model with first-order absorption, as in Lestini et al. (2014), in order to obtain a closed-form solution for the probability of toxicity. Toxicity was assumed to occur when the value of a function of AUC was above a given threshold, either with or without inter-individual variability (IIV). For each scenario, we simulated 1000 trials with 30, 36 and 42 patients. Results: Methods which incorporate PK measurements performed well when informative prior knowledge was available in terms of the Bayesian prior distributions on the parameters. On the other hand, keeping the prior information fixed, methods that included PK values as a covariate were less flexible and led to trials with more toxicities than the same trials with the CRM. Conclusion: Incorporating PK values as a covariate did not alter the efficiency of estimation of the MTD when the prior was well specified. The next step will be to assess the impact on the estimation of the dose-concentration-toxicity curve for the different approaches and to explore the introduction of fully model-based PK/PD in the dose allocation rules.
INI 1
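The standard continual reassessment method (CRM), used as the comparator in the study above, can be sketched with the usual one-parameter power model and a normal prior evaluated on a grid, as below. The PK-covariate extensions that are the subject of the talk are not reproduced; the skeleton, prior variance and function signature are illustrative choices.

```python
import numpy as np

def crm_next_dose(skeleton, dose_idx, tox, target=0.25, prior_var=1.34):
    """One-parameter power-model CRM: p_k(a) = skeleton[k] ** exp(a),
    with a normal prior on a. `dose_idx` holds 0-based indices of doses
    already administered and `tox` the corresponding 0/1 toxicity
    outcomes. Returns the index of the dose whose posterior-mean
    toxicity probability is closest to the target."""
    a = np.linspace(-4.0, 4.0, 801)
    da = a[1] - a[0]
    skeleton = np.asarray(skeleton, dtype=float)
    loglik = np.zeros_like(a)
    for k, y in zip(dose_idx, tox):
        p = skeleton[k] ** np.exp(a)
        loglik += y * np.log(p) + (1 - y) * np.log1p(-p)
    post = np.exp(loglik - loglik.max() - a ** 2 / (2 * prior_var))
    post /= post.sum() * da                           # normalise on the grid
    p_hat = np.array([np.sum(skeleton[k] ** np.exp(a) * post) * da
                      for k in range(len(skeleton))])
    return int(np.argmin(np.abs(p_hat - target)))

# Example: skeleton for 6 dose levels, two patients treated so far.
print(crm_next_dose([0.05, 0.10, 0.20, 0.30, 0.45, 0.60], dose_idx=[0, 1], tox=[0, 0]))
```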
15:30 to 16:00 Afternoon Tea
16:00 to 16:30 Model based adaptive optimal designs of adult to children bridging studies using FDA proposed stopping criteria
In this work we apply the FDA-proposed precision criteria necessary for pediatric pharmacokinetic studies (Wang et al., 2012) as a stopping criterion for a model based adaptive optimal design (MBAOD) of an adult to children pharmacokinetic bridging study. We demonstrate the power of the MBAOD compared to both traditional designs as well as non-adaptive optimal designs.
INI 1
16:30 to 17:00 Cost constrained optimal designs for regression models with random parameters
I describe various optimization problems related to the design of experiments for regression models with random parameters, also known as mixed effects models and population models. In terms of the latter, two different goals can be pursued: estimation of population parameters and estimation of individual parameters. Correspondingly, we have to face two types of optimality criteria and cost constraints. Additional strata appear once one observes that the following two experimental situations occur in practice: either repeated observations are admissible for a given experimental unit (object or subject), or they are not. Clinical studies with multiple sites with slightly different treatment outcomes (treatment-by-site interaction) are an example where repeated and independent observations are possible: a few subjects can be enrolled on each treatment arm. PK studies may serve as an example where repeated observations cannot be performed: only one observation can be taken on a subject at a given moment. All these caveats lead to different design problems that I try to link together.
INI 1
17:00 to 17:30 Discussion INI 1
Wednesday 8th July 2015
09:00 to 09:40 Model-robust designs for quantile regression
We give methods for the construction of designs for regression models, when the purpose of the investigation is the estimation of the conditional quantile function and the estimation method is quantile regression. The designs are robust against misspecified response functions, and against unanticipated heteroscedasticity. Our methods, previously developed for approximate linear models, are modified so as to apply to non-linear responses. The results will be illustrated in a dose-response example.
INI 1
09:40 to 10:20 Locally optimal designs for errors-in-variables models
We consider the construction of locally optimal designs for nonlinear regression models when there are measurement errors in the predictors. The corresponding approximate design theory is developed for models subject to a functional error structure for both maximum likelihood and least squares estimation, where the latter leads to non-concave optimisation problems. Locally D-optimal designs are found explicitly for the Michaelis-Menten, EMAX and exponential regression models and are then compared with the corresponding designs derived under the assumption of no measurement error in concrete applications.
INI 1
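For orientation, the sketch below computes a locally D-optimal design for the Michaelis-Menten model in the classical setting without measurement error in the predictor (the baseline against which the talk's errors-in-variables designs are compared), using the standard result that the locally D-optimal design for this two-parameter model is an equal-weight two-point design and searching its support points on a grid. Names and parameter values are illustrative.

```python
import numpy as np

def mm_gradient(x, V, K):
    """Gradient of the Michaelis-Menten mean V*x/(K + x) w.r.t. (V, K)."""
    return np.array([x / (K + x), -V * x / (K + x) ** 2])

def locally_d_optimal_two_point(V, K, x_max, n_grid=200):
    """Grid search over equal-weight two-point designs on (0, x_max],
    maximising the log-determinant of the normalised information."""
    grid = np.linspace(x_max / n_grid, x_max, n_grid)
    best, best_val = None, -np.inf
    for i, x1 in enumerate(grid):
        g1 = np.outer(mm_gradient(x1, V, K), mm_gradient(x1, V, K))
        for x2 in grid[i + 1:]:
            g2 = np.outer(mm_gradient(x2, V, K), mm_gradient(x2, V, K))
            sign, logdet = np.linalg.slogdet(0.5 * (g1 + g2))
            if sign > 0 and logdet > best_val:
                best, best_val = (x1, x2), logdet
    return best

# Example: nominal parameter guesses V = 1, K = 2 on the interval (0, 10].
print(locally_d_optimal_two_point(V=1.0, K=2.0, x_max=10.0))
```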
10:20 to 11:00 WK Wong (University of California, Los Angeles)
Nature-inspired meta-heuristic algorithms for generating optimal experimental designs
Nature-inspired meta-heuristic algorithms are increasingly studied and used in many disciplines to solve high-dimensional complex optimization problems in the real world. It appears relatively few of these algorithms are used in mainstream statistics even though they are simple to implement, very flexible and able to find an optimal or a nearly optimal solution quickly. Frequently, these methods do not require any assumption on the function to be optimized and the user only needs to input a few tuning parameters.

I will demonstrate the usefulness of some of these algorithms for finding different types of optimal designs for nonlinear models in dose response studies. Algorithms that I plan to discuss are more recent ones such as Cuckoo Search and Particle Swarm Optimization. I also compare their performance and advantages relative to deterministic state-of-the-art algorithms.

INI 1
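A minimal particle swarm optimiser of the kind referred to above is easy to write. The sketch below is a generic global-best PSO for a box-constrained maximisation problem, which could be pointed at, say, the log-determinant of an information matrix parameterised by a candidate design vector. It is not tied to any particular package; the constants and names are conventional, illustrative choices.

```python
import numpy as np

def pso_maximize(objective, lower, upper, n_particles=40, n_iter=200, seed=0):
    """Minimal global-best particle swarm optimiser for maximising a
    scalar objective over the box [lower, upper] (vectors of equal length)."""
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, dtype=float)
    upper = np.asarray(upper, dtype=float)
    dim = lower.size
    x = rng.uniform(lower, upper, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([objective(p) for p in x])
    gbest = pbest[np.argmax(pbest_val)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                         # conventional PSO constants
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lower, upper)
        values = np.array([objective(p) for p in x])
        improved = values > pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = values[improved]
        gbest = pbest[np.argmax(pbest_val)].copy()
    return gbest, float(pbest_val.max())
```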
11:00 to 11:30 Morning Coffee
11:30 to 12:00 Construction of efficient experimental designs under resource constraints
We will introduce "resource constraints" as a general concept that covers many practical restrictions on experimental design, such as limits on budget, time, or available material. To compute optimal or near-optimal exact designs of experiments under multiple resource constraints, we will propose a tabu search heuristic related to the Detmax procedure. To illustrate the scope and performance of the proposed algorithm, we chose the problem of constructing optimal experimental designs for dose-escalation studies.
INI 1
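The talk proposes a tabu search related to the Detmax procedure; as a much simpler stand-in, the sketch below shows a greedy exchange heuristic for an exact design under a single budget constraint, just to make the kind of search concrete. It is not the authors' algorithm; all names and the acceptance rule are illustrative.

```python
import numpy as np

def greedy_exchange_design(candidates, costs, budget, n_iter=500, seed=0):
    """Greedy exchange heuristic for an exact design under a budget:
    start from a random feasible subset of candidate design points
    (rows of `candidates`) and accept random one-for-one swaps that
    respect the budget and increase log det(X'X)."""
    rng = np.random.default_rng(seed)
    candidates = np.asarray(candidates, dtype=float)
    costs = np.asarray(costs, dtype=float)
    n = len(candidates)

    selected = np.zeros(n, dtype=bool)
    spent = 0.0
    for idx in rng.permutation(n):                     # random feasible start
        if spent + costs[idx] <= budget:
            selected[idx] = True
            spent += costs[idx]

    def criterion(mask):
        X = candidates[mask]
        sign, logdet = np.linalg.slogdet(X.T @ X)
        return logdet if sign > 0 else -np.inf

    best = criterion(selected)
    for _ in range(n_iter):
        chosen, unchosen = np.flatnonzero(selected), np.flatnonzero(~selected)
        if len(chosen) == 0 or len(unchosen) == 0:
            break
        i, j = rng.choice(chosen), rng.choice(unchosen)
        if spent - costs[i] + costs[j] > budget:
            continue
        trial = selected.copy()
        trial[i], trial[j] = False, True
        value = criterion(trial)
        if value > best:
            selected, best = trial, value
            spent += costs[j] - costs[i]
    return np.flatnonzero(selected)
```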
12:00 to 12:30 Designs for dose-escalation trials
For First-in-Human trials of a new drug, healthy volunteers are recruited in cohorts. For safety reasons, only the lowest dose and placebo may be used in the first cohort, and no new planned dose may be used until the one immediately below has been used in a previous cohort. How should doses be allocated to cohorts?
INI 1
12:30 to 13:30 Lunch at Wolfson Court
13:30 to 15:00 Free time
14:45 to 15:15 Afternoon Tea
15:00 to 17:00 Poster session
19:30 to 22:00 Conference Dinner at Cambridge Union Society hosted by Cambridge Dining Co.
Thursday 9th July 2015
09:30 to 10:00 Registration, Tea & Coffee
Session: Design of Experiments in Drug Development
09:30 to 17:00 Open for Business - Design of Experiments in Drug Development
Thursday will be devoted to an Open for Business Industry Day, organised by Vlad Dragalin, in partnership with the Turing Gateway to Mathematics. A key aim of this workshop will be to highlight state-of-the-art design of experiments methodologies and how these can impact the wider societal health objectives. For full details and information on the Programme, please see: http://www.turing-gateway.cam.ac.uk/ded_jul2015.shtml
Friday 10th July 2015
09:00 to 09:45 Efficient study designs
In the current economic climate, research income is tightening while the demand to answer important clinical questions is, arguably, the same or even increasing. Efficient study designs enable the questions to be answered while making optimal use of finite resources.

An efficient design is defined as a "statistically robust" design that facilitates quicker and/or lower-cost decisions when compared to conventional trial design. Efficient design is an umbrella term which encompasses adaptive designs, trials where routine health service data are used for the primary outcomes, and "smarter" ways of doing trials. By being efficient in the study design, resources and time can be saved in the assessment of new health technologies.

Through a series of case studies the benefits of efficient study designs will be highlighted. For example: the PLEASANT trial used routinely collected data for data collection and as a result was approximately £1 million cheaper; a retrospective analysis of RATPAC as a group sequential trial showed that if the trial had been adaptive it would have required a third of the patients and saved £250,000. An audit of publicly funded trials suggested that if futility analyses were undertaken then 20% of trials would stop early and the proportion of successfully recruiting trials would increase from 45% to 64%. As well as highlighting the benefits, the presentation will also discuss some of the challenges.

INI 1
09:45 to 10:30 Cross-sectional versus longitudinal design: does it matter?
Mixed models play an important role in describing observations in healthcare. To keep things simple we only consider linear mixed models, and we are only interested in the typical response, i.e. in the population location parameters. When discussing the corresponding design issues one is often faced with the widespread belief that standard designs which are optimal, when the random effects are neglected, retain their optimality in the mixed model. This seems to be obviously true, if there are only random block effects related to a unit specific level (e.g. random intercept). However, if there are also random effect sizes which vary substantially between units, then these standard designs may lose their optimality property. This phenomenon occurs in the situation of a cross-sectional design, where the experimental setting is fixed for each unit and varies between units, as well as in the situation of a longitudinal design, where the experimental setting varies within units and coincides across units. We will compare the resulting optimal solutions and check their relative efficiencies.
INI 1
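The comparison described above can be carried out numerically with a small helper that returns the fixed-effects information contributed by one unit in a random-intercept/random-slope model; summing it over the units of each layout and inverting gives the variances to compare. The sketch below, including the particular variance values and layouts, is purely illustrative.

```python
import numpy as np

def unit_information(X_unit, sigma2=1.0, d_intercept=1.0, d_slope=1.0):
    """Fixed-effects information contributed by one unit with design
    matrix X_unit (columns: intercept, slope variable) in a linear
    mixed model with independent random intercept and random slope."""
    D = np.diag([d_intercept, d_slope])
    V = X_unit @ D @ X_unit.T + sigma2 * np.eye(len(X_unit))
    return X_unit.T @ np.linalg.solve(V, X_unit)

# Longitudinal layout: 10 units, each observed at x = 0 and x = 1.
info_long = 10 * unit_information(np.array([[1.0, 0.0], [1.0, 1.0]]))

# Cross-sectional layout: 10 units at x = 0 and 10 units at x = 1,
# one observation each (same total number of observations).
info_cross = (10 * unit_information(np.array([[1.0, 0.0]]))
              + 10 * unit_information(np.array([[1.0, 1.0]])))

# Compare the variance of the estimated population slope under the two layouts.
print(np.linalg.inv(info_long)[1, 1], np.linalg.inv(info_cross)[1, 1])
```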
10:30 to 11:00 Morning Coffee
11:00 to 11:45 Randomization for small clinical trials
Rare disease clinical trials present unique problems for the statistician designing the study. First, the trial may be small, reflecting the uniquely small population of patients with the disease. Hence, the usual large-sample beneficial properties of randomization (balancing on unknown covariates, distribution of standard tests, converging to a target allocation) may not apply. We describe the impact of such trials on consideration of randomization procedures, and discuss randomization as a basis for inference. We conclude that, in small trials, the randomization procedure chosen does matter, and randomization tests should be used as a matter of course due to their property of preserving the type I error rate under time trends.
INI 1
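A re-randomization (randomization) test of the kind advocated above compares the observed test statistic with its distribution under re-draws from the same randomization procedure. The sketch below does this for a difference in means; the allocation generator is assumed to always produce patients in both groups (e.g. permuted blocks), and all names are illustrative.

```python
import numpy as np

def randomization_test(assignments, outcomes, generate_assignment,
                       n_resample=10000, seed=0):
    """Two-sided re-randomization test for a difference in means.
    `generate_assignment(rng, n)` must re-draw a 0/1 allocation from the
    same randomization procedure used in the trial."""
    rng = np.random.default_rng(seed)
    assignments = np.asarray(assignments)
    outcomes = np.asarray(outcomes, dtype=float)

    def statistic(a):
        return outcomes[a == 1].mean() - outcomes[a == 0].mean()

    observed = abs(statistic(assignments))
    exceed = 0
    for _ in range(n_resample):
        a = np.asarray(generate_assignment(rng, len(outcomes)))
        exceed += abs(statistic(a)) >= observed
    return (exceed + 1) / (n_resample + 1)             # add-one Monte Carlo p-value
```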
11:45 to 12:30 Assessment of randomization procedures based on single sequences under selection bias
Randomization is a key feature of randomized clinical trials, aiming to protect against various types of bias. Different randomization procedures have been introduced in the past decades and their analytical properties have been studied by various authors. Among others, balancing behaviour and protection against selection and chronological bias have been investigated. In summary, however, no procedure performs best on all criteria. On the other hand, in the design phase of a clinical trial the scientist has to select a particular randomization procedure to be used in the allocation process which takes into account the research conditions of the trial. Up to now, little support has been available to guide the scientist here, e.g. in weighing up the properties with respect to the practical needs of the research question to be answered by the clinical trial. We propose a method to assess the impact of chronological and selection bias on the type I error probability in a parallel group randomized clinical trial with a continuous normal outcome, to derive scientific arguments for the selection of an appropriate randomization procedure.

This is joint work with Simon Langer.

INI 1
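The type of assessment proposed above is typically simulation-based. As a simplified illustration restricted to chronological bias, the sketch below estimates the type I error rate of a two-sample t-test under a linear time trend for a user-supplied randomization procedure; the selection-bias component and the authors' specific biasing policy are not modelled. Parameter values and names are illustrative.

```python
import numpy as np
from scipy import stats

def type_one_error_under_time_trend(randomize, n=50, trend=0.5,
                                    n_sim=5000, alpha=0.05, seed=0):
    """Simulated type I error rate of the two-sample t-test when a
    linear chronological trend is added to outcomes with no treatment
    effect. `randomize(rng, n)` returns a 0/1 allocation sequence and
    is assumed to fill both groups."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sim):
        a = np.asarray(randomize(rng, n))
        y = rng.standard_normal(n) + trend * np.arange(n) / n   # null model + trend
        _, p = stats.ttest_ind(y[a == 1], y[a == 0])
        rejections += p < alpha
    return rejections / n_sim
```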
12:30 to 13:30 Lunch at Wolfson Court
13:30 to 14:15 H Dette (Ruhr-Universität Bochum)
Optimal designs for correlated observations
In this talk we will relate optimal design problems for regression models with correlated observations to estimation problems in stochastic differential equations. By using "classical" theory of mathematical statistics for stochastic processes we provide a complete solution of the optimal design problem for a broad class of correlation structures.
INI 1
14:15 to 15:00 Designs for dependent observations
We consider the problem of optimal experimental design for regression models on an interval, where the observations are correlated and the errors come from either a Markov or a conditionally Markov process. We study transformations of these regression models and the corresponding designs. We show, in particular, that in many cases we can assume that the underlying error process is Brownian motion.

This is a joint work with H. Dette and A. Pepelyshev.

INI 1
15:00 to 15:30 Afternoon Tea
15:30 to 16:15 J Kunert (Universität Dortmund)
Optimal crossover designs with interactions between treatments and experimental units
The interactions are modelled as random variables and therefore induce a correlation between observations of the same treatment on the same experimental unit. As a result, we have a dependence structure that depends on the design.
INI 1
16:15 to 17:00 J Stufken (Arizona State University)
On connections between orthogonal arrays and D-optimal designs for certain generalized linear models with group effects
We consider generalized linear models in which the linear predictor depends on a single quantitative variable and on multiple factorial effects. For each level combination of the factorial effects (or run, for short), a design specifies the number of values of the quantitative variable (or values, for short) that are to be used, and also specifies what those values are. Moreover, a design informs us, for each combination of run and value, what proportion of times it should be used. Stufken and Yang (2012, Statistica Sinica) obtained complete class results for locally optimal designs for such models. The complete classes that they found consisted of designs with at most two values for each run. Many optimal designs found numerically in these complete classes turned out to have precisely two values for each run, resulting in designs with a large support size. Focusing on D-optimality, we show that under certain assumptions for the linear predictor, optimal designs with smaller support sizes can be found through the use of orthogonal arrays. This is joint work with Xijue Tan and was part of his PhD dissertation at the University of Georgia.
INI 1