Timetable (DAEW03)

Design of Experiments in Healthcare

Monday 15th August 2011 to Friday 19th August 2011

Monday 15th August 2011
08:00 to 08:55 Registration
08:55 to 09:00 Welcome from John Toland (INI Director Designate) INI 1
09:00 to 09:45 V Fedorov ([GSK])
Design of clinical trials with multiple end points of different types
Several correlated end points are observed in almost any clinical trial. Typically one of them is designated as the primary end point and the design (dose allocation and sample size) is driven by a single response model. I discuss the design problem with multiple end points, which may be of different types. For instance, the efficacy end point may be continuous while the toxicity end point is discrete. I emphasize the need to distinguish between response and utility functions. The response (end point) functions are what we observe, while the utility functions are what should be reported or used in the decision making process. The optimality criteria discussed relate to the latter and usually describe the precision of their estimators.
INI 1
09:45 to 10:30 A Grieve ([Aptiv Solutions])
The Role of Operating Characteristic in Assessing Bayesian Designs in Pharmaceutical Drug Development
The available guidelines on the reporting of Bayesian clinical trials cover many important aspects, including the choice of prior, computational issues such as the convergence of MCMC approaches, and appropriate statistics for summarising posterior distributions. Noteworthy is the total absence of any discussion of the operating characteristics of Bayesian designs. This may be because these guidelines are largely written by academic and/or autonomous government groups rather than by those involved in pharmaceutical drug development, for example sponsor associations or regulatory agencies. However, operating characteristics are becoming increasingly important in drug development, as witnessed by the EMA and FDA guidances on adaptive designs and the FDA guidance on Bayesian methodology in device trials. In this talk I investigate issues in determining the operating characteristics of clinical trial designs, with a particular emphasis on Bayesian designs, but I will also cover more general issues such as the design of simulation experiments and simulation evidence for strong control of type I error.
INI 1
10:30 to 11:00 Morning coffee
11:00 to 11:45 D Berry ([MD Anderson Cancer Center])
The Critical Path: "Biomarker Development and Streamlining Clinical Trials"
INI 1
11:45 to 12:30 A Atkinson
Experiments for Enzyme Kinetic Models
Enzymes are biological catalysts that act on substrates. The speed of reaction as a function of substrate concentration typically follows the nonlinear Michaelis-Menten model. The reactions can be modified by the presence of inhibitors, which can act by several different mechanisms, leading to a variety of models, all also nonlinear.

The talk will describe the models and derive optimum experimental designs for model building. When the model is known these include D-optimum designs for all the parameters for which we obtain analytical solutions. Ds-optimum designs for the inhibition constant are also of scientific importance.

When the model is not known, the choice is often between two three-parameter models. These can be combined in a single four-parameter model. Ds-optimum designs for the parameter of combination provide a means of establishing which model is true. However, T-optimum designs for departures from the individual models provide tests of maximum power for departures from the models. With two models on an equal footing, compound T-optimum designs are required. Their properties are compared with those of the Ds-optimum designs in the combined model, which have the advantage of being easier to compute.
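As a concrete illustration of the analytical D-optimum designs mentioned above, the following minimal sketch computes the locally D-optimum two-point design for the Michaelis-Menten model v(s) = Vs/(K + s) by a grid search and checks it against the known closed-form lower support point s1 = K*s_max/(2K + s_max); the nominal values of V, K and the concentration range are illustrative assumptions, not values from the talk.

import numpy as np

# Locally D-optimum design for the Michaelis-Menten model
#   v(s) = V*s / (K + s),  s in (0, s_max]
# Nominal parameter values below are illustrative assumptions.
V, K, s_max = 1.0, 2.0, 10.0

def grad(s):
    # parameter sensitivities (dv/dV, dv/dK) at substrate concentration s
    return np.array([s / (K + s), -V * s / (K + s) ** 2])

def log_det_info(support):
    # log-determinant of the information matrix of an equally weighted design
    M = sum(np.outer(grad(s), grad(s)) for s in support) / len(support)
    sign, logdet = np.linalg.slogdet(M)
    return logdet if sign > 0 else -np.inf

# numerical search over the lower support point (the upper point is s_max)
s_grid = np.linspace(0.01, s_max, 2000)
s1_numeric = s_grid[np.argmax([log_det_info([s, s_max]) for s in s_grid])]

# known closed form for the lower support point of the D-optimum design
s1_analytic = K * s_max / (2 * K + s_max)
print(f"numeric s1 = {s1_numeric:.4f}, analytic s1 = {s1_analytic:.4f}")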
INI 1
12:30 to 13:30 Lunch at Wolfson Court
14:00 to 14:45 A Donner ([Western Ontario])
The Role of Cluster Randomization Trials in Health Research
Cluster randomization trials are those that randomize intact social units or clusters of individuals to different intervention groups. Such trials have been particularly widespread in the evaluation of educational programs and innovations in the provision of health care. This talk will deal with basic issues that must be considered when investigators first consider adopting a cluster randomization trial. Foremost among these is the need to justify the choice of this design given its statistical inefficiency relative to an individually randomized design. The role of matching and stratification in the design of a cluster trial, and the reasons why many such trials are underpowered, will also be discussed.
INI 1
14:00 to 14:45 J Lee ([MD Anderson Cancer Center])
Biomarker-based Bayesian Adaptive Designs for Targeted Agent Development - Implementation and Lessons Learned from the BATTLE Trial
Advances in biomedicine have fueled the development of targeted agents in cancer therapy. Targeted therapies have been shown to be more efficacious and less toxic than conventional chemotherapies. Targeted therapies, however, do not work for all patients. One major challenge is to identify markers for predicting treatment efficacy. We have developed biomarker-based Bayesian adaptive designs to (1) identify prognostic and predictive markers for targeted agents, (2) test treatment efficacy, and (3) provide better treatments for patients enrolled in the trial. In contrast to frequentist equal randomization designs, Bayesian adaptive randomization designs allow treating more patients with effective treatments, monitoring the trial more frequently to stop ineffective treatments early, and increasing efficiency while controlling type I and type II errors. Bayesian adaptive designs can be more efficient, more ethical, and more flexible in study conduct than standard designs. We have recently completed a biopsy-required, biomarker-driven lung cancer trial, BATTLE, evaluating four targeted treatments. Lessons learned from the design, conduct, and analysis of this Bayesian adaptive design will be given.
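To make the contrast with equal randomization concrete, here is a minimal beta-binomial sketch of outcome-adaptive randomization in which allocation probabilities are tilted towards the arm most likely to be best; this is a generic illustration, not the BATTLE algorithm, and the response rates are invented.

import numpy as np

rng = np.random.default_rng(1)

# Generic Bayesian adaptive randomization for K arms with binary response.
# Beta(1,1) priors; allocation probability = posterior probability that
# each arm is best (a common choice, not the BATTLE rule).
K = 4
true_p = np.array([0.2, 0.3, 0.5, 0.25])      # hypothetical response rates
successes, failures = np.ones(K), np.ones(K)  # Beta(1,1) posteriors

for patient in range(200):
    draws = rng.beta(successes, failures, size=(1000, K))
    p_best = np.bincount(draws.argmax(axis=1), minlength=K) / 1000
    arm = rng.choice(K, p=p_best)             # adaptive allocation
    y = rng.random() < true_p[arm]            # observe binary response
    successes[arm] += y
    failures[arm] += 1 - y

print("posterior mean response rates:", successes / (successes + failures))
print("patients per arm:", (successes + failures - 2).astype(int))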
INI 2
14:45 to 15:25 M Campbell ([Sheffield])
Cluster Randomised Trials: coping with selective recruitment, baseline covariates and anticipated drop-outs?
INI 1
14:45 to 15:25 K Wathen ([Johnson & Johnson])
ISPY-2: Adaptive Design to Identify Treatments for Biomarker
The ISPY2 process is a new approach to conducting clinical research that utilizes a patient’s biomarker measurements to predict which treatment is most likely to provide benefit. Patients will be adaptively randomized and the treatment assignment probabilities will be altered to favor the treatment that, on average, appears superior for a given patient’s biomarker characteristics. In contrast to the traditional phase II clinical trial, which has a fixed number of treatments, the ISPY2 process will allow new agents to enter the trial as they become available and will "graduate" treatments based on the likelihood of future success in a subset of the patient population. A simulation study is presented and examples given to demonstrate the adaptive nature of the design.
INI 2
15:25 to 16:05 S Eldridge ([QMUL])
Sample size calculations for cluster randomised trials
In this talk I will address the major issues in calculating an adequate sample size for cluster randomised trials. It has long been recognised that in order for these trials to be adequately powered, between cluster variability must be accounted for in the sample size calculations. This is usually done by using an estimate of the intra-cluster correlation coefficient (ICC) in a design effect which is then used to adjust the sample size required for an individually randomised trial aiming to detect the same clinically important difference. More recently it has been recognised that variable cluster size should also be accounted for and a simple adjustment to the design effect provides a means to do this. Investigators still face three challenges, however: lack of information about variability in cluster size prior to the trial, lack of information about the value of the ICC prior to the trial, the adjustment for variable cluster size does not strictly match all methods of analysis. I will illustrate these challenges with some examples and outline approaches that have been and could be adopted to address them.
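The calculation described above can be made concrete in a few lines. The sketch below uses the standard design effect 1 + (m - 1)ρ and one published form of the simple adjustment for variable cluster size, which replaces m - 1 by (cv² + 1)m̄ - 1; all numerical inputs are invented for illustration.

# Design effect calculations for a cluster randomised trial
# (illustrative numbers only).
n_individual = 400   # n required for an individually randomised trial
mean_m = 20          # average cluster size
cv = 0.6             # coefficient of variation of cluster sizes
icc = 0.05           # intra-cluster correlation coefficient (ICC)

# standard design effect for equal cluster sizes
deff_equal = 1 + (mean_m - 1) * icc

# simple adjustment for variable cluster size:
# replace m - 1 by (cv**2 + 1) * mean_m - 1
deff_unequal = 1 + ((cv ** 2 + 1) * mean_m - 1) * icc

print(f"design effect (equal sizes):    {deff_equal:.3f}")    # 1.950
print(f"design effect (variable sizes): {deff_unequal:.3f}")  # 2.310
print(f"total n required: {n_individual * deff_unequal:.0f}") # 924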
INI 1
15:25 to 16:05 T Braun ([Michigan])
Bayesian Adaptive Designs for Identifying Maximum Tolerated Combinations of Two Agents
Phase I trials of combination cancer therapies have been published for a variety of cancer types. Unfortunately, a majority of these trials suffer from poor study designs that either escalate doses of only one of the agents and/or use an algorithmic approach to determine which combinations of the two agents maintain a desired rate of dose-limiting toxicities (DLTs), which we refer to as maximum tolerated combinations (MTCs). We present a survey of recent approaches we have developed for the design of Phase I trials seeking to determine the MTC. For each approach, we present a model for the probability of DLT as a function of the doses of both agents. We use Bayesian methods to adaptively estimate the parameters of the model as each patient completes their follow-up in the trial, from which we determine the doses to assign to the next patient enrolled in the trial. We describe methods for generating prior distributions for the parameters in our model from a basic set of information elicited from clinical investigators. We compare and contrast the performance of each approach in a series of simulations of a hypothetical trial that examines combinations of four doses of two agents, and compare the results to those of an algorithmic design known as an A+B+C design.
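As a rough sketch of the kind of adaptive machinery involved (not the authors' models), the following toy example fits a two-agent logistic DLT model by simple prior Monte Carlo with likelihood weighting and recommends the combination whose posterior mean DLT rate is closest to the target; the priors, dose grid and data are all invented, and a real design would add escalation restrictions.

import numpy as np
from scipy.special import expit

rng = np.random.default_rng(0)

# Toy two-agent model: logit P(DLT) = b0 + b1*d1 + b2*d2 (an assumption
# for illustration only). Posterior via prior sampling + likelihood weights.
doses = [(i, j) for i in range(1, 5) for j in range(1, 5)]   # 4 x 4 grid
target = 0.30

def posterior_dlt(data, n_draws=20000):
    b = rng.normal([-3.0, 0.5, 0.5], [1.0, 0.3, 0.3], size=(n_draws, 3))
    logw = np.zeros(n_draws)
    for (d1, d2), y in data:
        p = expit(b[:, 0] + b[:, 1] * d1 + b[:, 2] * d2)
        logw += np.log(p) if y else np.log1p(-p)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return {d: float(w @ expit(b[:, 0] + b[:, 1] * d[0] + b[:, 2] * d[1]))
            for d in doses}

# posterior after three hypothetical patients (0 = no DLT, 1 = DLT)
post = posterior_dlt([((1, 1), 0), ((2, 1), 0), ((2, 2), 1)])
next_combo = min(post, key=lambda d: abs(post[d] - target))
print("recommended next combination:", next_combo)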
INI 2
16:05 to 16:35 Afternoon tea in main building
16:35 to 17:15 I White ([MRC])
A cluster-randomised cross-over trial
I will describe a trial which combined a cluster-randomised design with a cross-over design. The Preterm Infant Parenting (PIP) trial evaluated a nurse-led training intervention delivered to parents of prematurely born babies to help them meet their babies' needs. An individually randomised trial risked extensive "contamination" of parents in the control arm with knowledge of the intervention, so the investigators instead randomised neonatal units. However, neonatal units differ widely, and only 6 neonatal units were available, so a conventional cluster randomised design would have been underpowered. In the selected design, the six neonatal units were randomly allocated to deliver intervention or control to families recruited during a first 6-month period; after a 2-month interval, each unit then delivered the opposite condition to families recruited during a second 6-month period.

I will present the relative precisions of individually randomised, cluster-randomised and cluster-crossover designs, and design issues including the need for a wash-out period to minimise carry-over. The analysis can be conveniently done using cluster-level summaries. I will end by discussing whether cluster-crossover designs should be more widely used.
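A minimal sketch of the cluster-level-summary analysis mentioned above, on simulated data (all numbers invented): each cluster contributes the difference between its intervention-period and control-period means, and these differences are analysed with a one-sample t-test.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_clusters, m = 6, 30               # clusters, subjects per cluster-period
cluster_effect = rng.normal(0, 0.5, n_clusters)
treatment_effect = 0.4              # assumed true intervention effect

diffs = []
for c in range(n_clusters):
    y_control = rng.normal(cluster_effect[c], 1.0, m).mean()
    y_treat = rng.normal(cluster_effect[c] + treatment_effect, 1.0, m).mean()
    # the within-cluster contrast removes the between-cluster component,
    # which is where the cross-over design gains its precision
    diffs.append(y_treat - y_control)

t, p = stats.ttest_1samp(diffs, popmean=0.0)
print(f"estimated effect {np.mean(diffs):.3f}, t = {t:.2f}, p = {p:.3f}")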
INI 1
16:35 to 17:15 B Bogacka ([QMUL])
Dose Selection Incorporating PK/PD Information in Early Phase Clinical Trials.
Early phase clinical trials generate information on pharmacokinetic parameters and on safety issues. In addition, a dose level, or a set of dose levels, needs to be selected for further examination in later phases. If patients, rather than healthy volunteers, take part in the early phase, it may be possible to observe the effects of the drug on the disease. In the presentation we will discuss some statistical, ethical and economic aspects of designing optimum adaptive clinical trials for dose selection incorporating both pharmacokinetic and pharmacodynamic endpoints.
INI 2
17:15 to 17:55 C Weijer ([Western Ontario])
Ethical issues posed by cluster randomized trials in health research
The cluster randomized trial (CRT) is used increasingly in knowledge translation research, quality improvement research, community based intervention studies, public health research, and research in developing countries. While there is a small but growing literature on the subject, ethical issues raised by CRTs require further analysis. CRTs only partly fit within the current paradigm of research ethics. They pose difficult ethical issues for two basic reasons related to their design. First, CRTs involve the randomization of groups rather than individuals, and our understanding of the moral status of groups is incomplete. As a result, the answers to pivotal ethical questions, such as who may speak on behalf of a particular group and on what authority they may do so, are unclear. Second, in CRTs the units of randomization, experimentation, and observation may differ, meaning, for instance, that the group that receives the experimental intervention may not be the same as the group from which data are collected. The implications for the ethics of trials of experimental interventions with (solely) indirect effects on patients and others are not currently well understood. Here I lay out some basic considerations on who is a research subject, from whom one must obtain informed consent, and the use of gatekeepers in CRTs in health research (Trials 2011; 12(1): 100).
INI 1
17:15 to 17:55 K Cheung ([Columbia])
Objective Calibration of the Bayesian Continual Reassessment Method
The continual reassessment method (CRM) is a Bayesian model-based design for percentile estimation in sequential dose finding trials. The main idea of the CRM is to treat the next incoming patient (or group of patients) at a recent posterior update of the target percentile. This approach is intuitive and ethically appealing on a conceptual level. However, the performance of the CRM can be sensitive to how the CRM model is specified. In addition, since the specified model directly affects the generation of the design points in the trial, sensitivity analysis may not be feasible after the data are collected.

As there are infinitely many ways to specify a CRM model, the process of model calibration, typically done by trial and error in practice, can be complicated and time-consuming. In my talk, I will first review the system of model parameters in the CRM, and then describe some semi-automated algorithms to specify these parameters based on existing dose finding theory. Simulation results will be given to illustrate this semi-automated calibration process in the context of some real trial examples.
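For readers unfamiliar with the machinery being calibrated, a minimal sketch of the standard one-parameter power-model CRM follows; the skeleton, target and prior standard deviation are assumptions for illustration and are unrelated to the talk's calibration recommendations.

import numpy as np

# Standard one-parameter power-model CRM.
# Dose-toxicity model: p_d(theta) = skeleton_d ** exp(theta), theta ~ N(0, sigma^2)
skeleton = np.array([0.05, 0.12, 0.25, 0.40, 0.55])  # assumed prior guesses
target, sigma = 0.25, 1.34

theta = np.linspace(-4, 4, 4001)                 # discretised parameter grid
prior = np.exp(-0.5 * (theta / sigma) ** 2)

def next_dose(doses_given, dlt_observed):
    like = np.ones_like(theta)
    for d, y in zip(doses_given, dlt_observed):
        p = skeleton[d] ** np.exp(theta)
        like *= p if y else 1 - p
    post = prior * like
    post /= post.sum()                           # discretised posterior
    # posterior-mean toxicity probability at each dose level
    p_hat = np.array([(post * skeleton[d] ** np.exp(theta)).sum()
                      for d in range(len(skeleton))])
    return int(np.argmin(np.abs(p_hat - target))), p_hat

dose, p_hat = next_dose([0, 0, 1], [0, 0, 1])    # hypothetical data so far
print("posterior toxicity estimates:", np.round(p_hat, 3))
print("recommended next dose index:", dose)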
INI 2
18:15 to 19:00 Dinner at Murray Edwards College (residents only)
19:00 to 19:45 Welcome wine reception and posters
Tuesday 16th August 2011
09:30 to 10:00 V Dragalin ([Aptiv Solutions, USA])
Adaptive Dose-Ranging Designs with Two Efficacy Endpoints
Following the introduction of the continual reassessment method by O’Quigley, Pepe and Fisher, there has been considerable interest in formal statistical procedures for phase I dose-finding studies. The great majority of published accounts relate to cancer patients treated once with a single dose of the test drug who return a single binary observation concerning the incidence of toxicity. However, most phase I dose-finding studies are not of such a simple form. Drugs being developed for milder conditions than cancer are usually first tested in healthy volunteers who participate in multiple dosing periods, returning a continuous pharmacokinetic response each time.

This talk will describe Bayesian decision procedures which have been developed for such dose-finding studies in healthy volunteers. The principles behind the approach will be described and an evaluation of its properties presented. An account will be given of an implementation of the approach in a study conducted in Scandinavia. Generalisation to studies in which more than one response is used will also be discussed.
INI 1
10:00 to 10:30 L Pronzato ([CNRS])
Penalized optimal design for dose finding
We consider optimal design under a cost constraint, where a scalar coefficient L sets the compromise between information and cost. For suitable cost functions, one can force the support points of an optimal design measure to concentrate around points of minimum cost by increasing the value of L, which can be considered as a tuning parameter that specifies the importance given to the cost constraint.

An example of adaptive design in a dose-finding problem with a bivariate binary model will be presented. As usual in nonlinear situations, the optimal design for any arbitrary choice of L depends on the unknown value of the model parameters. The construction of this optimal design can be made adaptive, by using a steepest-ascent algorithm where the current estimated value of the parameters (by Maximum Likelihood) is substituted for their unknown value. Then, taking advantage of the fact that the design space (the set of available doses) is finite, one can prove the strong consistency and asymptotic normality of the ML estimator when L is kept constant. Since the cost is reduced when L is increased, it is tempting to let L increase with the number of observations (patients enrolled in the trial). The strong consistency of the ML estimator is then preserved when L increases slowly enough.
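A minimal numerical sketch of the penalized-design idea (with a single binary endpoint in place of the bivariate binary model of the talk, and an invented cost function): the criterion log det M(ξ) − L Σ ξ(x)c(x) is maximised over a finite dose grid by a vertex-direction algorithm.

import numpy as np
from scipy.special import expit

# Penalized D-optimal design on a finite dose grid (simplified sketch).
doses = np.linspace(0, 1, 21)
beta = np.array([-3.0, 6.0])          # nominal logistic parameters (assumed)
L = 2.0                               # cost-penalty coefficient
cost = 1.0 + 3.0 * doses              # assumed dose-dependent cost c(x)

def info(d):
    # elemental information matrix of a logistic model at dose d
    p = expit(beta[0] + beta[1] * d)
    f = np.array([1.0, d])
    return p * (1 - p) * np.outer(f, f)

infos = np.array([info(d) for d in doses])
xi = np.full(len(doses), 1 / len(doses))     # start from uniform weights

for k in range(2000):
    M = np.tensordot(xi, infos, axes=1)
    Minv = np.linalg.inv(M)
    # penalized directional derivative tr(Minv I(x)) - L c(x) at each dose
    dirderiv = np.einsum('kij,ji->k', infos, Minv) - L * cost
    best = int(np.argmax(dirderiv))
    alpha = 1.0 / (k + 2)                    # decreasing step size
    xi = (1 - alpha) * xi
    xi[best] += alpha

support = [(round(float(d), 2), round(float(w), 3))
           for d, w in zip(doses, xi) if w > 0.01]
print("penalized design (dose, weight):", support)

Increasing L in this sketch visibly pulls the design weight towards the cheap end of the dose grid, which is the trade-off the abstract describes.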
INI 1
10:30 to 11:00 C Jennison ([Bath])
Jointly optimal design of Phase II and Phase III clinical trials: an over-arching approach
We consider the joint design of Phase II and Phase III trials. We propose a decision theoretic formulation with a gain function arising from a positive Phase III outcome and costs for sampling and for time taken to reach a positive conclusion. With a prior for the dose response model and a risk curve for the probability that doses fail on safety grounds, the challenge is to optimise the design for comparing doses in Phase II, the choice of dose or doses to take forward to Phase III, and the Phase III design. We shall show it is computationally feasible to tackle this problem and discuss possible generalisations from an initial, simple formulation.
INI 1
11:00 to 11:30 Morning coffee
11:00 to 12:30 DAE informal discussion INI 2
11:30 to 12:00 S-J Wang ([FDA])
Utility and pitfalls of dose ranging trials with multiple study objectives: fixed or adaptive
Multiple study objectives have been proposed in dose-ranging studies. Traditionally, a dose-response study is pursued as a fixed design with an equal randomization ratio to each study arm and with the single study objective of detecting a dose-response (DR) relationship. The PhRMA adaptive dose-ranging working group has taken ownership of the problem, using an adaptive design or an adaptive analysis approach and acknowledging the exploratory nature of the trial. The authors have critically pursued multiple study objectives via simulation studies (JBS 2007, SBR 2010) and concluded that achieving the first goal of detecting DR is much easier than achieving the fourth goal of estimating it, or the second and third goals of identifying the target dose to bring into the confirmatory phase. It is tempting to consider dose-ranging, dose-response and sometimes exposure-response studies as pivotal evidence, especially when they are designed as two-stage adaptive trials. Design according to the study objective is vital to the success of the study. In this presentation, the utility and pitfalls of a two-stage adaptive dose-ranging trial will be elucidated. Challenges and reflections on some of the successful and not so successful regulatory examples will be highlighted. The appropriate distinction between the learning stage and the confirmatory stage in a drug development program will also be discussed using some typical studies.
INI 1
12:00 to 12:30 J Pinheiro ([Johnson & Johnson])
Improving dose-finding methods in clinical development: design, adaptation, and modeling
The pharmaceutical industry experiences increasingly challenging conditions, with a combination of escalating development costs, a tougher regulatory environment, expiring patents on important drugs, and fewer promising drugs in late-stage development. Part of this pipeline problem is attributed to poor dose selection for confirmatory trials, leading to high attrition rates (estimated at 50%) for Phase 3 programs. Improving the efficiency of drug development in general, and of dose-finding studies in particular, is critical for the survival of the industry. A variety of methods have been proposed to improve dose selection and, more broadly, understanding of the dose-response relationship for a compound, among them adaptive designs, modeling and simulation approaches, optimal designs, and clinical utility indices. In this talk we’ll discuss and illustrate the utilization of some of those approaches in the context of dose-finding trials. The results of a comprehensive set of simulation studies conducted by the PhRMA working group on Adaptive Dose-Ranging Studies will be used to discuss the relative merits of the various approaches and to motivate recommendations on their use in practice.
INI 1
12:30 to 13:30 Lunch at Wolfson Court
14:00 to 14:30 F Bretz ([Novartis])
Response-adaptive dose-finding under model uncertainty
In pharmaceutical drug development, dose-finding studies are of critical importance because both safety and clinically relevant efficacy have to be demonstrated for a specific dose of a new compound before market authorization. Motivated by a real dose-finding study, we propose response-adaptive designs addressing two major challenges in dose-finding studies: uncertainty about the dose-response models and large variability in parameter estimates. To allocate new cohorts of patients in an ongoing study, we use optimal designs that are robust under model uncertainty. In addition, we use a Bayesian shrinkage approach to stabilize the parameter estimates over the successive interim analyses used in the adaptations. This approach allows us to calculate updated parameter estimates and model probabilities that can then be used to calculate the optimal design for subsequent cohorts. The resulting designs are hence robust with respect to model misspecification and can additionally adapt efficiently to the information accrued in an ongoing study. We focus on adaptive designs for estimating the minimum effective dose, although alternative optimality criteria or mixtures thereof could be used, enabling the design to address multiple objectives. In an extensive simulation study, we investigate the operating characteristics of the proposed method under a variety of scenarios.
INI 1
14:30 to 15:00 H Thygesen ([Lancaster])
Dose Escalation using a Bayesian Model: rational decision rules.
In dose escalation studies, the potential ethical costs of administering high doses must be weighed against the added utility of gaining safety information about high (and potentially effective) doses. This is the rationale for starting with low doses while confidence in the safety of the drug is low, and escalating to higher doses as confidence grows. A decision-theoretic framework is proposed.
INI 1
15:00 to 15:30 B Neuenschwander ([Novartis Pharma AG])
Bayesian approaches to Phase I clinical trials: methodological and practical aspects
Statistics plays an important role in drug development, in particular in confirmatory (Phase III) clinical trials, where statistically convincing evidence is a requirement for the registration of a drug. However, statistical contributions to Phase I clinical trials are typically sparse. A notable exception is oncology, where statistical methods abound. After a short review of the main approaches to Phase I cancer trials, we discuss a fully adaptive model-based Bayesian approach which strikes a reasonable balance between various objectives. First, proper quantification of the risk of dose-limiting toxicities (DLT) is the key to acceptable dosing recommendations during the trial, and to the declaration of the maximum tolerated dose (MTD), a dose with an acceptable risk of DLT, at the end of the trial. In other words, statistically driven dosing recommendations should be clinically meaningful. Second, the operating characteristics of the design should be acceptable: the probability of finding the correct MTD should be reasonably high. Third, not too many patients should be exposed to overly toxic doses. And fourth, the approach should allow for the inclusion of relevant study-external information, such as pre-clinical data or data from other human studies. The methodological and practical aspects of this Bayesian approach to Phase I dose finding trials in oncology will be discussed, and examples from actual trials will be used to illustrate and highlight important issues. The presentation concludes with a discussion of the main challenges for a large-scale implementation of innovative clinical trial designs in the pharmaceutical industry.
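A sketch of the flavour of such a model-based approach: a two-parameter Bayesian logistic dose-toxicity model evaluated on a parameter grid, with an overdose-control rule that flags doses whose posterior probability of excessive toxicity is too high. The doses, priors, the 0.33 toxicity band and the 0.25 feasibility bound are illustrative assumptions, not the values used in the talk.

import numpy as np
from scipy.special import expit

# Two-parameter Bayesian logistic dose-toxicity model (illustrative sketch):
#   logit P(DLT) = log_alpha + exp(log_beta) * log(dose / d_ref)
doses = np.array([1, 2.5, 5, 10, 20, 40])
d_ref = 20.0

# parameter grid for (log_alpha, log_beta) with independent normal priors
la = np.linspace(-4, 2, 121)
lb = np.linspace(-2, 2, 81)
LA, LB = np.meshgrid(la, lb, indexing='ij')
logprior = -0.5 * ((LA + 1.0) / 1.5) ** 2 - 0.5 * (LB / 1.0) ** 2

def p_dlt(dose):
    return expit(LA + np.exp(LB) * np.log(dose / d_ref))

def posterior(data):
    loglik = np.zeros_like(LA)
    for dose, y in data:
        p = p_dlt(dose)
        loglik += np.log(p) if y else np.log1p(-p)
    w = np.exp(logprior + loglik)
    return w / w.sum()

data = [(1, 0), (1, 0), (2.5, 0), (2.5, 1)]    # hypothetical cohort results
w = posterior(data)
for dose in doses:
    over = float(w[p_dlt(dose) > 0.33].sum())  # P(excessive toxicity)
    flag = "ok" if over < 0.25 else "overdose risk"
    print(f"dose {dose:>5}: P(P(DLT) > 0.33) = {over:.2f}  [{flag}]")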
INI 1
15:30 to 16:00 S Leonov ([GlaxoSmithKline])
Application of model-based designs in drug development
We discuss the use of optimal model-based designs at different stages of drug development. Special attention is given to adaptive model-based designs in dose finding studies and to designs for nonlinear mixed models which arise in population pharmacokinetic/pharmacodynamic studies. Examples of software tools and their application are provided.
INI 1
16:00 to 16:45 Afternoon tea
16:45 to 18:00 M Krams ([Johnson & Johnson])
Design of Experiments in Healthcare, dose-ranging studies, astrophysics and other dangerous things
Panel discussion of the day's topics including:
  • Clinical objectives at different stages of drug development
  • Their formulation in terms of design of experiment objectives
  • Optimal design for each objective
  • Challenges in their implementation
INI 1
18:15 to 19:00 Dinner at Murray Edwards College (residents only)
Wednesday 17th August 2011
09:00 to 09:45 J Louviere ([U of Technology, Sydney])
A Brief History of DCEs and Several Important Challenges

A confrontation with reality led to the integration of conjoint measurement, discrete multivariate analysis of contingency tables, random utility theory, discrete choice models and the design of statistical experiments. Few seem to realise that discrete choice experiments (DCEs) are in fact sparse, incomplete contingency tables. Thus, much of that literature informs and assists the design and analysis of DCEs, such that complex statistical models are often largely unnecessary. Many lack this perspective, and hence much of the literature is dominated by model-driven views of the design and analysis of DCEs.

The transition from the first DCEs to the present was very incremental and haphazard, with many advances being driven by market confrontations. For example "availability" designs arose from being asked to solve problems with out-of-stock conditions, infrastructure interruptions (eg, road or bridge closures), etc. Progress became more rapid and systematic from the late 1990s onwards, particularly with researchers skilled in optimal design theory getting involved in the field. Thus, there have been major strides in the optimal design of DCEs, but there now seems to be growing awareness that experiments on humans pose interesting issues for "optimal" design, particularly designs that seek to optimise statistical efficiency.

Along the way we stumbled onto individuals, error variance differences, cognitive process differences and we're still stumbling.

This talk is about a journey that starts in 1927 with paired comparisons, travels along an ad hoc path until it runs into an airline in 1978, emerges five years later as a systematic way to design and implement multiple comparisons, and slowly wanders back and forth until it begins to pick up speed and follow a "more optimal" path. Where is it going? Well, one researcher's optimum may well be one human's suboptimum. Where should it be going? The road ahead is littered with overconfidence and assumptions. A better path is to invest in insurance against ignorance and assumptions.

INI 1
09:45 to 10:30 M Ryan ([Aberdeen])
Discrete Choice Experiments in Health Economics
Since their introduction in health economics in the early 1990s, there has been increasing interest in the use of discrete choice experiments (DCEs), at both the applied and the methodological level. At the applied level, whilst the technique was introduced into health economics to go beyond narrow definitions of health benefits (Quality Adjusted Life Years, QALYs) and to value broader measures of utility (patient experiences/well-being), it is now being applied to an ever increasing range of policy questions. Methodological developments have also been made with respect to methods for developing attributes and levels, techniques for defining the choice sets presented to individuals (experimental design), and methods for analysing response data. This talk considers the journey of DCEs in health economics, discussing both where we are and where we should go.
INI 1
10:30 to 11:00 Morning coffee
11:00 to 11:45 J Rose ([U of Technology, Sydney])
Sample size, statistical power and discrete choice experiments: How much is enough?
Discrete choice experiments (DCEs) represent an important method for capturing data on the preferences held by both patients and health care practitioners for various health care policies and/or products. Identifying methods for reducing the number of respondents required for DCE studies is important given increases in survey costs. Such reductions, however, must not come at the cost of reduced reliability of the parameter estimates obtained from discrete choice models.

The usual method of reducing the number of sampled respondents in DCE studies conducted in health research appears to be the use of orthogonal fractional factorial experimental designs, with respondents assigned to choice situations either via a blocking variable or via random assignment. Through the use of larger block sizes (i.e., each block has a larger number of choice situations), or by randomly assigning a greater number of choice situations per respondent, analysts may decrease the number of respondents whilst retaining a fixed total number of choice observations. It should be noted, however, that whilst such strategies reduce the number of respondents required for DCE studies, they also reduce the variability observed in other covariates collected over the sample.

Yet despite practical reasons to reduce survey costs, particularly through reductions in the sample sizes employed in DCE studies, questions persist as to the minimum number of choice observations, in terms of both the number of respondents and the number of questions asked of each respondent, that are required to obtain reliable parameter estimates for discrete choice models estimated from DCE data. In this talk, we address both issues in the context of the main methods of generating experimental designs for DCEs in health care studies. We demonstrate a method for calculating the minimum sample size required for a DCE that does not require rules of thumb.
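The sort of calculation meant by the last sentence can be sketched as follows: given assumed prior parameter values and a design, compute the single-respondent asymptotic covariance matrix of the MNL estimator and find the smallest N at which every parameter would be significant at the 5% level. The two-attribute design and priors below are invented for illustration.

import numpy as np

# Minimum sample size for an MNL-based DCE from the asymptotic covariance
# matrix (illustrative design and priors, not from the talk).
# Each choice set: alternatives x attributes, effects-coded toy design.
choice_sets = [np.array([[ 1,  1], [-1, -1]]),
               np.array([[ 1, -1], [-1,  1]]),
               np.array([[-1,  1], [ 1, -1]])]
beta = np.array([0.6, 0.3])        # assumed prior parameter values

I1 = np.zeros((2, 2))              # Fisher information from ONE respondent
for X in choice_sets:
    p = np.exp(X @ beta)
    p /= p.sum()                   # MNL choice probabilities
    xbar = p @ X
    I1 += (X * p[:, None]).T @ X - np.outer(xbar, xbar)

se1 = np.sqrt(np.diag(np.linalg.inv(I1)))   # s.e. for a single respondent
# smallest N with |beta_k| / (se1_k / sqrt(N)) >= 1.96 for every parameter
N_min = int(np.ceil(np.max((1.96 * se1 / np.abs(beta)) ** 2)))
print("single-respondent s.e.:", np.round(se1, 3))
print("minimum respondents:", N_min)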
INI 1
11:00 to 11:45 C Taylor ([Birmingham])
Systematic review of the use of stepped wedge cluster randomized trials
Background In a stepped wedge cluster randomized controlled trial, clusters are randomly allocated to the order in which they will receive the intervention, with one cluster receiving the intervention at the beginning of each study period (step). Therefore by the end of the recruitment period all clusters have received the intervention, but the number of periods in the ‘control’ and ‘intervention’ sections of the wedge will vary across clusters.

Objective To describe the application of the stepped wedge cluster randomized controlled trial design using a systematic review.

Study Design and Setting We searched MEDLINE, EMBASE, PSYCINFO, HMIC, CINAHL, Cochrane Library, Web of Knowledge and Current Controlled Trials Register for articles published up to January 2010. Stepped wedge cluster randomized controlled trials from all fields of research were included. Two authors independently reviewed and extracted data from the studies.

Results Twenty-five studies were included in the review. Motivations for using the design included ethical, logistical, financial, social and political acceptability, and methodological reasons. Most studies were evaluating an intervention during routine implementation. For most of the included studies there was also a belief or empirical evidence suggesting that the intervention would do more good than harm. There was variation in data analysis methods and insufficient quality of reporting.

Conclusions The stepped wedge cluster randomized controlled trial design has been mainly used for evaluating interventions during routine implementation, particularly for interventions that have been shown to be effective in more controlled research settings, or where there is lack of evidence of effectiveness but there is a strong belief that the intervention will do more good than harm. There is need for consistent data analysis and reporting.
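The allocation structure described in the Background section can be written down directly; the sketch below generates a stepped wedge schedule for an invented number of clusters and steps, randomising the order in which clusters cross over.

import numpy as np

# Stepped wedge allocation matrix: clusters are randomised to the order
# of crossover, one cluster starting the intervention at each step.
rng = np.random.default_rng(7)
n_clusters, n_steps = 5, 5            # illustrative dimensions

order = rng.permutation(n_clusters)   # randomised crossover order
# periods: one baseline period plus one period per step
schedule = np.zeros((n_clusters, n_steps + 1), dtype=int)
for rank, cluster in enumerate(order):
    schedule[cluster, rank + 1:] = 1  # 0 = control, 1 = intervention

print("rows = clusters, columns = periods (0 control, 1 intervention)")
print(schedule)                       # every cluster ends on intervention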
INI 2
11:45 to 12:30 L Moulton ([Johns Hopkins])
Challenges in the Design and Analysis of a Randomized, Phased Implementation (Stepped-Wedge) Study in Brazil
The cluster randomized one-way crossover design, known as a stepped-wedge design, is becoming increasingly popular, especially for health studies in less industrialized countries. This design, however, presents numerous challenges, both for design and analysis.

Two issues regarding the design of a stepped-wedge study will be highlighted: randomization and power. Specifically, first, there is the question of how best to constrain the randomization so that it is balanced over time with respect to covariates; a highly constrained but ad hoc procedure will be presented. Second, the various pieces of information necessary for a full power calculation will be delineated.

As with cluster-randomized designs in general, close attention must be given to the study hypotheses of interest, and the relation of these to the two levels of intervention: cluster and individual. A study of isoniazid prophylaxis implementation in 29 clinics in Rio de Janeiro is used to exemplify the range of questions that can arise. A few analyses of the data are also presented, so as to illustrate the degree to which data analytic choices to address these questions can vary the results, and to show the longitudinal complexities that need to be considered.
INI 2
11:45 to 12:30 P Goos ([Antwerpen])
Optimal designs for discrete choice experiments in the presence of many attributes
In a discrete choice experiment each respondent typically chooses the best product or service sequentially from many groups, or choice sets, of alternatives which are characterized by a number of different attributes. Respondents can find it difficult to trade off prospective products or services when every attribute of the offering changes in each comparison. Especially in studies involving many attributes, respondents get overloaded by the complexity of the choice task. To overcome respondent fatigue, it makes sense to simplify the comparison by holding some of the attributes constant in every choice set. Our approach is motivated by a study in the health care literature in which eleven attributes were allocated across three different experimental designs, with only five attributes being varied in each. However, our algorithm is more general, allowing for any number of attributes and a smaller number of fixed attributes. We describe our algorithmic approach and show how the resulting design performed in our motivating example.
INI 1
12:30 to 13:00 H Grossmann ([QMUL])
Partial profile paired comparison designs for avoiding information overload
The inclusion of many attributes makes a choice experiment more realistic. The price to be paid for this increased face validity is however that the respondents' task becomes cognitively more demanding. In order to avoid negative side effects, such as fatigue or information overload, a common strategy is to employ partial profiles, which are incomplete descriptions of the available alternatives. This talk presents efficient designs for the situation where each choice set is a pair of partial profiles and where only the main effects of the attributes are to be estimated.
INI 1
12:30 to 13:00 SG Thompson ([Cambridge])
Stepped wedge randomised trials
INI 2
13:00 to 14:00 Lunch at Wolfson Court
14:00 to 17:00 Excursion: walk to Grantchester Orchard Tea Garden
Link to map: http://maps.google.com/maps/ms?ie=UTF8&oe=UTF8&msa=0&msid=106746755186829767761.000484fac16459f50dc7c
Link to Grantchester Orchard Tea Garden: http://www.orchard-grantchester.com
19:30 to 22:30 Hog Roast Garden Party at The Moller Centre
Thursday 18th August 2011
09:00 to 09:45 E Lancsar ([Monash])
Discrete choice experimental design for alternative specific choice models: an application exploring preferences for drinking water
Health economic applications of discrete choice experiments have generally used generic forced choice experimental designs, or to a lesser extent generic designs with an appended status quo or opt out option. Each has implications for the types of indirect utility functions that can be estimated from such designs. Less attention has been paid to allowing for alternative specific choice experiments. This paper focuses on the development and use of an experimental design that allows for both labelled alternatives and alternative specific attribute effects in the context of a best worst choice study designed to investigate preferences for different types of drinking water. Results including testing for alternative specific effects and preferences for different types of drinking water options are presented, with implications explored.
INI 1
09:45 to 10:30 R Kessels ([Universiteit Antwerpen])
The usefulness of Bayesian optimal designs for discrete choice experiments

Recently, the use of Bayesian optimal designs for discrete choice experiments has gained a lot of attention, stimulating the development of Bayesian choice design algorithms. Characteristic of the Bayesian design strategy is that it incorporates the available information about people's preferences for various product attributes into the choice design. In the first part of this talk, we show how this information can best be incorporated in the design, using an experiment from health care in which preferences are measured for changes in eleven health system performance domains.

The Bayesian design methodology contrasts with the linear design methodology that is also used in discrete choice design, and which depends for any claim of optimality on the unrealistic assumption that people have no preference for any of the attribute levels. Nevertheless, linear design principles have often been used to construct discrete choice experiments. In the second part, we use a simulation study to show that the resulting utility-neutral optimal designs are not competitive with Bayesian optimal designs for estimation purposes.

INI 1
10:30 to 11:00 Morning coffee
11:45 to 12:30 P van de Ven ([Vrije U, Amsterdam])
An efficient alternative to the complete matched-pairs design for assessing non-inferiority of a new diagnostic test
Studies for assessing non-inferiority of a new diagnostic test relative to a standard test typically use a complete matched-pairs design in which results for both tests are obtained for all subjects. We present alternative non-inferiority tests for the situation where results for the standard test are obtained for all subjects but results for the new test are obtained for a subset of those subjects only. This situation is common when results for the standard test are available from a monitoring or screening programme or from a large biobank. A stratified sampling procedure is presented for drawing the subsample of subjects that receive the new diagnostic test with strata defined by the two outcome categories of the standard test. Appropriate statistical tests for non-inferiority of the new diagnostic test are derived. We show that if diagnostic test positivity is low, the number of subjects to be tested with the new test is minimized when stratification is non-proportional.
INI 1
12:30 to 13:30 Lunch at Wolfson Court
14:00 to 14:45 Y Ji ([M.D. Anderson Cancer Center])
From Bench to Bedside: The Application of Differential Protein Networks on Bayesian Adaptive Designs for Trials with Targeted Therapies
INI 1
14:45 to 15:30 K Kim ([Wisconsin])
A Bayesian Adaptive Design with Biomarkers for Targeted Therapies and Some Commentary on Adaptive Designs
Pharmacogenomic biomarkers are considered an important component of targeted therapies as they can potentially be used to identify patients who are more likely to benefit from them. New study designs may be helpful which can evaluate both the prognosis based on the biomarkers and the response to targeted therapies. In this talk I will present a recently developed Bayesian response-adaptive design. The design utilizes individual pharmacogenomic profiles and clinical outcomes as they become available during the course of the trial to assign the most effective treatments to patients. I will present simulation studies of the proposed design. In closing I will share my perspectives on adaptive designs in general.
INI 1
15:30 to 16:00 Afternoon tea
16:00 to 16:45 P Thall ([M.D. Anderson Cancer Center])
Optimizing the Concentration and Bolus of a Drug Delivered by Continuous Infusion
We consider treatment regimes in which an agent is administered continuously at a specified concentration until either a therapeutic response is achieved or a predetermined maximum infusion time is reached. Additionally, a portion of the planned maximum total amount of the agent is administered as an initial bolus. Efficacy is the time to response, and toxicity is a binary indicator of an adverse event that may occur after infusion. The amount of the agent received by the patient thus depends on the time to response, which in turn affects the probability of toxicity. An additional complication arises if response is evaluated periodically, since the response time is then interval censored. We address the problem of designing a clinical trial in which such response time data and toxicity are used to jointly optimize the concentration and the size of the initial bolus. We propose a sequentially adaptive Bayesian design that chooses the optimal treatment for each patient by maximizing the posterior mean utility of the joint efficacy-toxicity outcome. The methodology is illustrated by a clinical trial of tissue plasminogen activator (tPA) infused intra-arterially as rapid treatment for acute ischemic stroke. The fundamental problem is that too little tPA may not dissolve the clot that caused the stroke, but too much may cause a symptomatic intra-cranial hemorrhage, which often is fatal. A computer simulation study of the design in the context of the tPA trial is presented.
INI 1
16:45 to 17:30 B Mukherjee ([Michigan])
Discussion of three talks on (covariate) adaptive designs
INI 1
18:15 to 19:00 Dinner at Murray Edwards College (residents only)
Friday 19th August 2011
09:45 to 10:30 B Bornkamp ([Novartis])
Functional uniform prior distributions for nonlinear regression
In this talk I will consider the problem of finding prior distributions in nonlinear modelling situations, that is, when a major component of the statistical model depends on a nonlinear function. Making use of a functional change-of-variables theorem, one can derive a distribution that is uniform in the space of functional shapes of the underlying nonlinear function and then back-transform it to obtain a prior distribution for the original model parameters. The primary application considered here is nonlinear regression in the context of clinical dose-finding trials. The priors so constructed have the advantage that they are parametrization invariant, unlike uniform priors on the parameter scale, and can be calculated before data collection, unlike the Jeffreys prior. I will investigate the priors for a real data example and for the calculation of Bayesian optimal designs, which require the prior distribution to be available before data collection has started (so that classical objective priors such as Jeffreys priors cannot be used).
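A minimal sketch of one way such a construction can look for a one-parameter model (the model, dose range and grids are illustrative assumptions): take the prior density proportional to the norm of the Jacobian of the parameter-to-function map, which for a scalar parameter reduces sqrt(det(J'J)) to a Euclidean norm evaluated on a dense dose grid.

import numpy as np

# Functional uniform prior sketch for f(d, t) = d / (d + t) on doses [0, 1]
# (t plays the role of an ED50-type parameter; everything here is assumed).
d_grid = np.linspace(0, 1, 201)          # dense grid over the dose range
t_grid = np.linspace(0.01, 2, 400)       # parameter values to evaluate
dt = t_grid[1] - t_grid[0]

def jacobian_norm(t):
    # derivative of the model function w.r.t. t on the dose grid;
    # sqrt(det(J'J)) reduces to the Euclidean norm for a scalar parameter
    df_dt = -d_grid / (d_grid + t) ** 2
    return np.sqrt(np.sum(df_dt ** 2))

density = np.array([jacobian_norm(t) for t in t_grid])
density /= density.sum() * dt            # normalise to a proper density

# the prior is uniform in the space of function shapes: it down-weights
# regions of t where changing t barely changes the fitted curve
median = t_grid[np.searchsorted(np.cumsum(density) * dt, 0.5)]
print(f"prior median of t: {median:.3f}")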
INI 1
10:30 to 11:00 Morning coffee
11:00 to 11:45 M Savelieva Praz ([Novartis])
PKPD modelling to optimize dose-escalation trials in Oncology
The purpose of dose-escalation trials in oncology is to determine the highest dose that provides the desired treatment effect without unacceptable toxicity, the so-called Maximum Tolerated Dose (MTD). Neuenschwander et al. [1] introduced a Bayesian model-based approach that provides realistic inferential statements about the probabilities of a Dose-Limiting Toxicity (DLT) at each dose level. After each patient cohort, information is derived from the posterior distribution of the model parameters. This model output helps the clinical team to define the dose for the next patient cohort. The approach allows not only more efficient patient allocation but also the inclusion of prior information regarding the shape of the dose-toxicity curve. However, in its simplest form, the method relies on the assumption that toxicity events are driven solely by the dose, and that the patient population is homogeneous with respect to the response. This is rarely the case, particularly in a very heterogeneous population of cancer patients.

Stratification of the response by covariates, such as disease, disease status, baseline characteristics, etc., could potentially reduce the variability and allow subpopulations that are more or less prone to experience an event to be identified. This stratification requires enough data to be available, which is rarely the case when toxicity events are used as the response variable. We propose to use a PKPD approach to model the mechanistic process underlying the toxicity. In this way, all the data, including those from patients who have not (yet) experienced a toxicity event, are taken into account. Furthermore, various covariates can be introduced into the model, and predictions can be made for patient subgroups of interest. We thus aim to reduce the number of patients exposed to low and inefficient doses, the number of cohorts, and the total number of patients required to define the MTD, and ultimately to reach the MTD faster and at lower cost. We test the methodology on a concrete example and discuss the benefits and drawbacks of the approach.

References

[1] Neuenschwander B., Branson M., Gsponer T. Critical aspects of the Bayesian approach to Phase I cancer trials. Statistics in Medicine 2008; 27:2420-2439.
[2] Piantadosi S., Liu G. Improved designs for dose escalation studies using pharmacokinetic measurements. Statistics in Medicine 1996; 15:1605-1618.
[3] Müller P., Quintana F.A. Random partition models with regression on covariates. Journal of Statistical Planning and Inference 2010; 140(10):2801-2808.
[4] Berry S., Carlin B., Lee J., Müller P. Bayesian Adaptive Methods for Clinical Trials. CRC Press, 2010.
INI 1
11:45 to 12:30 J Taylor ([Michigan])
Designs and models for Phase I oncology trials with intra-patient dose escalation
INI 1
12:30 to 13:30 Lunch at Wolfson Court
14:00 to 14:45 N Flournoy ([Missouri])
Some Issues in Response-Adaptive Designs for Dose-finding Experiments
We discuss some of the many issues involved in selecting an adaptive design, including the choice between frequentist and Bayesian, and between parametric and nonparametric, procedures. There is great appeal in using all the information gained to date, but in many settings two- or three-stage designs have been shown to perform almost as well as fully adaptive ones. Furthermore, with many procedures, an unfortunate string of early responses can have strong undesirable effects on estimates. These consequences can be mitigated by using a short-term memory procedure rather than a long-term memory procedure. When interest is in the MTD, placing subjects around the MTD is symbiotic with efficiently estimating the MTD; this is not so when interest is in finding a dose that is efficacious without toxicity. The compromise between designing for ethical treatment and designing for efficient estimation of the best dose should be given serious attention in practice; although it has been stated that adaptive designs let one do both, this simply is not the case. Finally, we briefly consider issues related to stopping for toxicity and lack of efficacy, sample size recalculation, and dropping or adding treatments. How flexible should a clinical trial be? Are analysts prepared for the negative impact such flexibility has on estimates of effect size?
INI 1
14:45 to 15:30 B Rosenberger ([George Mason])
Principles for Response-Adaptive Randomization
We discuss guiding principles for the use of response-adaptive randomization in clinical trials. First, we describe a set of criteria by which the investigator can determine whether response-adaptive randomization is useful. Then we discuss a template for the appropriate selection of a response-adaptive randomization procedure. Such guidance should be useful in designing state-of-the-art clinical trials.
INI 1
15:30 to 16:15 A Giovagnoli ([Bologna])
Recent developments in adaptive clinical trials to account for individual and collective ethics
Most Phase III clinical trials are carried out in order to compare different drugs or therapies. The aim may be to estimate some treatment effects separately or, more commonly, to estimate or test their differences. The ethical concern of assigning treatments to patients so as to care for each of them individually often conflicts with the demands for rigorous experimentation on one hand, and randomization on the other. Recently, there has been a growing statistical interest in sequential procedures for treatment comparison which at each stage use the available information with the ethical aim of skewing allocations towards the best treatment. In two recent papers ([1], [2]) the present authors have approached the problem via the optimization of a compromise criterion, obtained by taking a weighted average of a design optimality measure and a measure of the subjects' risk. The relative weights in the compound criterion have been allowed to depend on the true state of nature, since it is reasonable to suppose that the more the effects of the treatments differ, the more important for the patients are the chances of receiving the best treatment.

The purpose of this presentation is to extend the theoretical results of [1] and [2] and enhance their applicability by means of some numerical examples. We shall first of all find a "target" allocation, namely one that optimizes the above-mentioned compound criterion for different response models, also taking into account observable categorical covariates. Since the target does in general depend on the unknown parameters, the implementation of adaptive randomization methods to make the experiment converge to the desired target is illustrated. For simplicity here we consider the most common case of just two treatments.

References

  1. A. Baldi Antognini, A. Giovagnoli (2010) "Compound Optimal Allocation for Individual and Collective Ethics in Binary Clinical Trials." Biometrika 97(4), 935-946.
  2. A. Baldi Antognini, M. Zagoraiou (2010) "Covariate adjusted designs for combining efficiency, ethics and randomness in normal response trials." In mODa 9 - Advances in Model Oriented Design and Analysis (A. Giovagnoli, A. Atkinson, B. Torsney, eds; C. May, co-ed.), Heidelberg: Physica-Verlag/Springer, 17-24. ISBN 978-3-7908-2409-4.
INI 1
16:15 to 16:20 Closing remarks INI 1
18:15 to 19:00 Dinner at Murray Edwards College (residents only)