Workshop Programme

for period 21 - 23 June 2010

Simulation of Stochastic Networks (including RESIM 2010)

21 - 23 June 2010

Timetable

Monday 21 June
08:30-09:00 Registration
09:00-09:10 Welcome from Sir David Wallace (INI Director)
Session: RESIM Day 1
09:10-10:00 Blanchet, J (Columbia)
  Rare event simulation for stochastic networks Sem 1
 

We shall discuss several importance sampling techniques for efficient estimation of overflow probabilities or sampling conditional paths in stochastic networks. We are interested in a wide class of environments including light-tailed and heavy-tailed networks. Our discussion includes algorithms that are optimal in the sense of achieving the best possible rate of convergence as the associated large deviations parameter increases. For instance, in the case of arbitrary Jackson networks our estimators are strongly efficient and we can generate exact conditional overflow paths in linear time (as a function of the overflow level). This talk is based on joint work with Peter Glynn, Henry Lam, Kevin Leder, Chenxin Li and Jingchen Liu.
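
A minimal sketch of the kind of importance sampling discussed here, for the simplest possible network (a single M/M/1 queue), where the classical asymptotically optimal change of measure simply swaps the arrival rate lam and the service rate mu; all parameter values below are assumptions for a toy example, not from the talk.

import random

lam, mu = 0.3, 1.0    # assumed arrival and service rates, lam < mu
B = 25                # overflow level
p = lam / (lam + mu)  # up-step probability of the embedded random walk
q = mu / (lam + mu)   # importance sampling: swap arrival and service rates

def one_run():
    # one biased run from state 1 until absorption at 0 or B, tracking the likelihood ratio
    x, lr = 1, 1.0
    while 0 < x < B:
        if random.random() < q:
            x += 1
            lr *= p / q
        else:
            x -= 1
            lr *= (1 - p) / (1 - q)
    return lr if x == B else 0.0

n = 10_000
est = sum(one_run() for _ in range(n)) / n
rho = (1 - p) / p
print(est, (1 - rho) / (1 - rho ** B))  # exact gambler's-ruin value for comparison

Under the swapped measure every path that reaches B from 1 carries the same likelihood ratio (lam/mu)^(B-1), which is why this change of measure is strongly efficient for the single queue.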

 
10:00-10:25 Collamore, JF; Vidyashankar, AN (Copenhagen; George Mason)
  Rare event simulation for the ruin problem with investments via importance sampling and duality Sem 1
 

We develop an efficient importance sampling algorithm for a Cramér-Lundberg insurance model with stochastic investments. Our approach utilizes the duality between this process and an extended GARCH(1,1) financial process. Using this duality and the regenerative structure of the GARCH(1,1) financial process, we introduce a novel large deviation change of measure idea to compute the probability of ruin for the insurance process. Finally, we examine state-dependent importance sampling in the context of this problem and comment on statistical aspects of our algorithm.

 
10:25-10:50 de Boer, PT; Scheinhardt, WRW (Twente)
  A new, analysis-based, change of measure for tandem queues Sem 1
 

We introduce a simple analytical approximation for the overflow probability of a two-node tandem queue. From this, we derive a change of measure, which turns out to have good performance in almost the entire parameter space. The form of our new change of measure sheds an interesting new light on earlier changes of measure for the same problem, because here the transition zone from one measure to another (which they all have) arises naturally.

 
10:50-11:15 Morning Coffee
11:15-11:40 Rubinstein, R (Israel Institute of Technology)
  Stochastic enumeration method for rare events, counting and combinatorial optimization Sem 1
 

We present a new method for rare-event probability estimation, combinatorial optimization and counting, called the stochastic enumeration (SE) method. In terms of counting, SE presents a stochastic replica of the naive full enumeration method. It is well known that the latter is typically meaningless since the associated counting sets, such as the sets of feasible solutions of the integer programming constraints, are huge. The SE method overcomes this difficulty by using a manageable sample size. We show how to implement the SE method for some well known difficult counting problems, such as self-avoiding walks, 0-1 tables and satisfiability problems, discuss its convergence, and present numerical studies demonstrating its superiority to the classic splitting method.
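
The SE method itself evolves a small population of trajectories; as a single-particle point of comparison, the sketch below implements the classical sequential (Knuth-style) estimator for counting self-avoiding walks on Z^2, which full SE generalises. The walk length and sample size are arbitrary assumptions.

import random

def saw_estimate(L=15):
    # one sequential trajectory: multiply the numbers of self-avoiding extensions
    path, x, est = {(0, 0)}, (0, 0), 1.0
    for _ in range(L):
        nbrs = [(x[0] + dx, x[1] + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (x[0] + dx, x[1] + dy) not in path]
        if not nbrs:
            return 0.0        # a trapped walk contributes zero
        est *= len(nbrs)      # number of feasible extensions at this step
        x = random.choice(nbrs)
        path.add(x)
    return est

n = 100_000
print(sum(saw_estimate() for _ in range(n)) / n)  # unbiased for the number of length-15 SAWs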

 
11:40-12:05 Ridder, A; Tuffin, B (Amsterdam; Rennes)
  Efficiency of the cross-entropy method in Markov chain problems Sem 1
 

Consider the problem of estimating the probability $\gamma(0)$ that a discrete-time Markov chain $(X(t))_{t=0}^{\infty}$ on a finite state space $\mathcal{X}$, with transition probabilities $(p(x,y))_{x,y\in\mathcal{X}}$, reaches a failure set $F$ from a reference state $0$ before returning to the reference state. We assume that this is a rare event, where the rarity parameter $n$ might be the population size in queueing applications, or $n = 1/\epsilon$ in reliability applications, with $\epsilon$ the failure rate. For simulation we apply importance sampling by generating sample paths of the Markov chain using transition probabilities $p^{IS}(x,y)$. Let $Y^{IS}_n$ be the resulting estimator of $\gamma(0)$. In this paper we investigate the efficiency of the estimator, in terms of bounded relative error and logarithmically bounded relative error, defined by $\limsup_{n\to\infty} \sqrt{\mathrm{Var}[Y_n]}/\mathbb{E}[Y_n] < \infty$ and $\liminf_{n\to\infty} \log\sqrt{\mathrm{Var}[Y_n]}/\log\mathbb{E}[Y_n] \geq 1$, respectively. Optimal but not implementable would be to use the zero-variance transition probabilities $p^{ZV}(x,y) = p(x,y)\gamma(y)/\gamma(x)$, where $\gamma(x)$ is the probability of reaching the failure set before the reference state when starting in state $x$; then the importance sampling estimator has zero variance. Suppose that we approximate $\gamma(x)$ by $\gamma^{APP}(x)$, for any state $x$. L'Ecuyer and Tuffin (2009) gave conditions on the approximated transition probabilities under which the associated importance sampling estimator shows bounded relative error. In this paper we analyse the cross-entropy method for estimating the zero-variance transition probabilities. For that purpose we first show that the zero-variance transition probabilities satisfy
$$p^{ZV}(x,y) = \frac{\mathbb{E}\left[\mathbf{1}\{X(T)\in F\}\,N(x,y)\,\middle|\,X(0)=0\right]}{\mathbb{E}\left[\mathbf{1}\{X(T)\in F\}\,\sum_{z\in\mathcal{X}} N(x,z)\,\middle|\,X(0)=0\right]},$$
where $T$ is the first entrance time in $F \cup \{0\}$, and $N(x,y)$ is the number of times that transition $(x,y)$ has occurred. The cross-entropy method provides a procedure for estimating the expectations in this expression, resulting in estimators $p^{EST}(x,y)$ of the zero-variance transition probabilities. Let $p^{CE}(x,y)$ be realised values of these estimators, based on finite sample sizes. Then we give a sufficient condition for the associated estimator $Y^{CE}_n$ to show logarithmically bounded relative error. The condition is in terms of the Kullback-Leibler distance $D(P^{ZV}, P^{CE})$ between the underlying probability measures that are associated with these changes of measure. Moreover, we propose a probabilistic condition on the estimators $p^{EST}(x,y)$ such that we obtain the bounded relative error property probabilistically. This means the following: the condition gives us a probability $\alpha$ such that, with probability $\alpha$, the estimator $Y^{CE}_n$ of $\gamma(0)$ has bounded relative error. The proof is based on checking the conditions of L'Ecuyer and Tuffin (2009).
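
A crude Monte Carlo rendering of the displayed identity for an assumed toy birth-death chain (the paper's cross-entropy method estimates the same two expectations adaptively under a change of measure, which is what makes the approach viable for genuinely rare events):

import random
from collections import defaultdict

K, p = 10, 0.4    # assumed toy chain: failure set F = {K}, up-probability p

def cycle():
    # one cycle from the reference state 0: run until re-entering 0 or hitting K
    N, x = defaultdict(int), 0
    while True:
        y = x + 1 if random.random() < p else max(x - 1, 0)
        N[(x, y)] += 1
        if y == K:
            return True, N
        if y == 0:
            return False, N
        x = y

num, den = defaultdict(float), defaultdict(float)
for _ in range(200_000):
    hit, N = cycle()
    if hit:                               # the indicator 1{X(T) in F} in both expectations
        for (x, y), n in N.items():
            num[(x, y)] += n
            den[x] += n

p_zv = {xy: num[xy] / den[xy[0]] for xy in num}
print(p_zv.get((1, 2)), p_zv.get((2, 3)))  # compare with p(x,y) * gamma(y) / gamma(x)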

 
12:05-12:30 Lam, H (Harvard)
  Rare-Event Simulation for Markov-Modulated Perpetuities Sem 1
 

In this talk I will discuss two rare-event simulation problems with both heavy-tailed and Markov-modulated features. Both involve the use of a Lyapunov inequality and state-dependent sampling that relies crucially on the regenerative structure of the modulating process. The first problem is the computation of the large deviations probability of a perpetuity in which both the cash flow and the discount rate (which can be positive or negative and unbounded) are modulated by an underlying Markov economic environment. Despite our assumption that the discount rate possesses a finite exponential moment, the functional form of the perpetuity in terms of the discount rate leads to power-law decay. The problem is interesting from a simulation standpoint because of the occurrence of both light- and heavy-tailed features: the existence of the exponential moment allows us to perform exponential tilting on the discount rate, yet the power-law decay suggests the one-big-jump intuition of heavy-tailed processes, and hence a careful Lyapunov-type analysis is required to guarantee asymptotic optimality. The second problem that we consider is the computation of the first passage probability for a Markov-modulated random walk. In this case a generalized version of the Lyapunov inequality is considered, obtained by breaking the random walk into regenerative cycles. Asymptotic optimality of the algorithm is obtained by careful tuning of parameters that depend on the initial state of the current cycle as well as the current state of the walk.

 
12:30-13:30 Lunch at Wolfson Court
14:00-14:50 Hult, H; Svensson, J (KTH, Stockholm)
  Computing risk measures by importance sampling Sem 1
 

Computation of extreme quantiles and tail-based risk measures using standard Monte Carlo simulation can be inefficient. A method to speed up computations is provided by importance sampling. We show that importance sampling algorithms, designed for efficient tail probability estimation, can significantly improve Monte Carlo estimators of tail-based risk measures. In the heavy-tailed setting, when the random variable of interest has a regularly varying distribution, we provide sufficient conditions for the asymptotic relative error of importance sampling estimators of risk measures, such as Value-at-Risk and expected shortfall, to be small. The results are illustrated by some numerical examples.
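
A hedged sketch of the idea for a Pareto-tailed random variable: sample from a heavier-tailed Pareto proposal, weight by the likelihood ratio, and read Value-at-Risk and expected shortfall off the weighted empirical distribution. The proposal and parameter choices are assumptions, not the paper's tuned algorithm.

import numpy as np

alpha, alpha_is = 3.0, 1.0    # assumed Pareto tail index of X; heavier-tailed IS proposal
n, beta = 100_000, 0.999      # sample size and risk-measure confidence level

rng = np.random.default_rng(1)
x = rng.random(n) ** (-1.0 / alpha_is)            # Pareto(alpha_is) samples on [1, inf)
w = (alpha / alpha_is) * x ** (alpha_is - alpha)  # likelihood ratio of the two densities

order = np.argsort(x)
x, w = x[order], w[order] / n                     # weighted empirical distribution
tail = np.cumsum(w[::-1])[::-1]                   # estimated P(X >= x[i]) for each i
i = int(np.searchsorted(-tail, -(1 - beta)))      # first index with tail mass <= 1 - beta
var = x[i]                                        # importance sampling estimate of VaR
es = float(np.sum((x * w)[i:]) / tail[i])         # estimate of expected shortfall
print(var, (1 - beta) ** (-1 / alpha))            # exact Pareto VaR for comparison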

 
14:50-15:15 Guyader, A; Hengartner, NW; Matzner-Løber, E (Rennes 2; Los Alamos; Rennes 2)
  Iterative Monte Carlo for extreme quantiles and extreme probabilities
 

Let $X$ be a random vector with distribution $\mu$ on ${\mathbb R}^d$ and $\Phi$ be a mapping from ${\mathbb R}^d$ to ${\mathbb R}$. That mapping acts as a black box, e.g., the result from some computer experiments for which no analytical expression is available. This paper presents an efficient algorithm to estimate a tail probability given a quantile, or a quantile given a tail probability. It proceeds by successive elementary steps, each one being based on the Metropolis-Hastings algorithm. The algorithm improves upon existing multilevel splitting methods and can be analyzed using Poisson process tools that lead to an exact description of the distribution of the estimated probabilities and quantiles. The performance of the algorithm is demonstrated in a problem related to digital watermarking.
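
A minimal sketch of this kind of iterative scheme, assuming a Gaussian toy example: the lowest-scoring of $N$ particles is repeatedly discarded and regenerated above the current level by Metropolis-Hastings moves (here preconditioned Crank-Nicolson proposals, which are reversible with respect to the Gaussian law), and each discard multiplies the probability estimate by $1 - 1/N$. Details such as the exact stopping rule are simplified.

import math
import numpy as np

rng = np.random.default_rng(0)
d, N, q = 5, 100, 12.0           # assumed toy setup: dimension, particles, target level
phi = lambda x: x.sum()          # the black box; here Phi(X) ~ N(0, d) under X ~ N(0, I_d)

X = rng.standard_normal((N, d))
vals = np.array([phi(x) for x in X])
k, rho, T = 0, 0.9, 20           # discards so far, pCN correlation, MH moves per refresh
while vals.min() <= q:
    i = int(np.argmin(vals))
    L = vals[i]
    j = int(rng.integers(N - 1))               # restart from a survivor...
    j = j if j < i else j + 1                  # ...other than the minimal particle
    x, v = X[j].copy(), vals[j]
    for _ in range(T):                         # pCN proposals are reversible wrt N(0, I),
        y = rho * x + math.sqrt(1 - rho**2) * rng.standard_normal(d)
        w = phi(y)
        if w > L:                              # so accepting iff Phi > L targets mu(.|Phi > L)
            x, v = y, w
    X[i], vals[i] = x, v
    k += 1
p_hat = (1 - 1 / N) ** k                       # estimated tail probability P(Phi(X) > q)
print(p_hat, 0.5 * math.erfc(q / math.sqrt(2 * d)))  # exact Gaussian tail for comparison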

 
15:15-15:40 Nakayama, M; Chu, F (New Jersey Institute of Technology)
  Constructing confidence intervals for quantiles when using variance-reduction techniques Sem 1
 

Consider a random variable $X$ having cumulative distribution function (CDF) $F$. For $0 < p < 1$, the $p$-quantile of $X$ is defined to be $\xi_p = F^{-1}(p) = \inf\{ x : F(x) \geq p \}$. Quantiles are often used to assess risk. For example, a project planner may be interested in the $0.95$-quantile $\xi_{0.95}$ of the time to complete a project, so there is a $95\%$ chance that the project will complete by time $\xi_{0.95}$. In the finance industry, a quantile is known as a value-at-risk (VaR), and VaRs are widely employed as measures of portfolio risk. We develop methods to construct an asymptotically valid confidence interval for $\xi_p$ when applying a variance-reduction technique (VRT). We establish our results within a general framework for VRTs, which encompasses antithetic variates (AV), control variates (CV), and a combination of importance sampling and stratified sampling (IS+SS).

The basic method to construct a point estimator for $\xi_p$ is as follows. Run a simulation applying a VRT, and use the output to construct an estimator $\tilde{F}_n$ of the CDF $F$, where $n$ denotes the computational budget (e.g., sample size). The VRT quantile estimator is then $\tilde{\xi}_{p,n} = \tilde{F}_n^{-1}(p)$. Our framework specifies a set of assumptions on $\tilde{F}_n$, which we show holds for AV, CV, and IS+SS.

To produce a confidence interval for $\xi_p$, we first prove that the VRT quantile estimator satisfies Ghosh's [2] weaker form of a Bahadur representation [1], which implies $\tilde{\xi}_{p,n}$ obeys a central limit theorem (CLT). The variance constant $\kappa_p^2$ in this CLT can be expressed as $\kappa_p = \psi_p \phi_p$, where $\psi_p$ depends on the VRT applied but $\phi_p = 1/f(\xi_p)$ is independent of the estimation technique. It turns out that $\psi_p^2$ is the variance constant appearing in the CLT for $\tilde{F}_n(\xi_p)$, and defining a consistent estimator of $\psi_p$ is straightforward. The main issue is providing a consistent estimator of $\phi_p$ when applying a VRT, and we derive such an estimator using the Bahadur representation. (A small antithetic-variates sketch of the point estimator follows the references below.)

References
  • [1] R.R. Bahadur. A note on quantiles in large samples. Annals of Mathematical Statistics, 37:577-580, 1966.
  • [2] J.K. Ghosh. A new proof of the Bahadur representation of quantiles and an application. Annals of Mathematical Statistics, 42:1957-1961, 1971.
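
For the point estimator only (the paper's contribution, the confidence interval via consistent estimators of $\psi_p$ and $\phi_p$, is omitted), a minimal antithetic-variates sketch, assuming a monotone output function of a single driving uniform:

import numpy as np

def g(u):
    # monotone simulation output as a function of the driving uniform:
    # an Exp(1) variable, so the true p-quantile is -log(1 - p)
    return -np.log(1 - u)

rng = np.random.default_rng(0)
p, n = 0.95, 50_000
u = rng.random(n // 2)
xs = np.concatenate([g(u), g(1 - u)])  # antithetic pairs reuse the same uniforms
xi_hat = np.quantile(xs, p)            # invert the empirical CDF built from AV output
print(xi_hat, -np.log(1 - p))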

 
15:40-16:15 Afternoon Tea
16:15-17:30 Poster session for Simulation of Networks event
17:30-18:30 Welcome Wine Reception
18:45-19:30 Dinner at Wolfson Court
Tuesday 22 June
Session: RESIM Day 2
09:00-09:50 Dupuis, P; Cai, Y (Brown)
  Analysis of an interacting particle scheme for rare event estimation Sem 1
 

A number of schemes for Monte Carlo estimation of rare events are based on splitting particles when they reach certain thresholds. Among these is the interacting particle scheme introduced in [1] and also discussed at recent RESIM conferences. A feature of this approach that is considered attractive is that the total number of particles (and hence the total computational effort needed to generate a sample) is controlled. Although this scheme has been observed to perform well as the probability being estimated gets small, prior analysis has tended to focus on limits where the number of particles gets large with the probability held fixed. One reason is that all particles are statistically re-coupled at each threshold, and hence limits where the number of particles is fixed and the probability gets small are difficult to analyze. We introduce some new techniques for the large deviation analysis of such systems. Although the large deviation scaling for the probability of interest is what is known as a “small noise” large deviation limit, the analysis of interacting particle systems requires ideas from the large deviation theory for occupation measures of Markov chains. [1] P. Del Moral and J. Garnier. Genealogical particle analysis of rare events. Ann. Appl. Probab., 15:2496–2534, 2005.
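
For concreteness, a fixed-population multilevel splitting sketch in the spirit of such interacting particle schemes, applied to a toy gambler's-ruin overflow probability; thresholds at every integer level and all parameter values are assumptions.

import random

p, L, N = 0.4, 20, 1000    # assumed up-step probability, overflow level, particle count

def advance(x, level):
    # run one particle from state x until it reaches `level` or is absorbed at 0
    while 0 < x < level:
        x += 1 if random.random() < p else -1
    return x

particles, est = [1] * N, 1.0
for level in range(2, L + 1):                  # one threshold per integer level
    survivors = [y for y in (advance(x, level) for x in particles) if y == level]
    if not survivors:
        est = 0.0
        break
    est *= len(survivors) / N                  # fraction that reached the next threshold
    particles = [random.choice(survivors) for _ in range(N)]  # re-couple: resample up to N
rho = (1 - p) / p
print(est, (1 - rho) / (1 - rho ** L))         # exact gambler's-ruin value

The resampling step is the re-coupling the abstract refers to: the population size is restored to N at every threshold, keeping the total effort per sample controlled.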

 
09:50-10:15 Cerou, F; Guyader, A; Lelièvre, T (France)
  An adaptive replica approach to simulate reactive trajectories Sem 1
10:15-10:40 Krystul, J; Le Gland, F; Lezaud, P (Twente; Rennes; Toulouse)
  Sample per mode simulation for switching diffusions Sem 1
 

We consider the problem of rare event estimation in switching diffusions using an Interacting Particle Systems (IPS) based Monte Carlo simulation approach [1]. While in theory the IPS approach is applicable to virtually any strong Markov process, in practice the straightforward application of this approach to switching diffusions may fail to produce reasonable estimates within a reasonable amount of simulation time. The reason is that there may be few, if any, particles in modes with small probabilities (i.e., "light" modes). This happens because each resampling step tends to sample more "heavy" particles from modes with higher probabilities; thus, "light" particles in the "light" modes tend to be discarded. This badly affects IPS estimation performance. By increasing the number of particles the IPS estimates should improve, but only at the cost of substantially increased simulation time, which makes the performance of the IPS approach in switching diffusions similar to that of standard Monte Carlo. To avoid this, a conditional "sampling per mode" algorithm has been proposed in [2]: instead of starting the algorithm with $N$ particles randomly distributed, we draw, in each mode $j$, a fixed number $N^j$ of particles, and at each resampling step the same number of particles is sampled for each visited mode. Using the techniques introduced in [3], we recently established a Law of Large Numbers as well as a Central Limit Theorem for the estimate of the rare event probability. [1] P. Del Moral and P. Lezaud. Branching and interacting particle interpretations of rare event probabilities. In Stochastic Hybrid Systems: Theory and Safety Critical Applications, H. Blom and J. Lygeros, editors, Lecture Notes in Control and Information Sciences 337, pp. 277-323, Springer, Berlin, 2006. [2] J. Krystul. Modelling of Stochastic Hybrid Systems with Applications to Accident Risk Assessment. PhD Dissertation, University of Twente, The Netherlands, 2006. [3] F. Le Gland and N. Oudjane. A sequential particle algorithm that keeps the particle system alive. In Stochastic Hybrid Systems: Theory and Safety Critical Applications, H. Blom and J. Lygeros, editors, Lecture Notes in Control and Information Sciences 337, pp. 351-389, Springer, Berlin, 2006.

 
10:40-11:15 Morning Coffee
11:15-11:40 Villén-Altamirano, J; Villén-Altamirano, M; Vázquez Gallo, E (Technical University of Madrid)
  RESTART simulation of non-Markovian queuing networks Sem 1
 

The performance requirements of broadband communication networks are often expressed in terms of events with very low probability. Analytical or numerical evaluation is only possible for a very restricted class of systems. Crude simulations require prohibitively long execution times for the accurate estimation of very low probabilities, and thus acceleration methods are necessary. A more frequent occurrence of a formerly rare event is achieved by performing a number of simulation retrials when the process enters regions of the state space where the importance is greater, i.e., regions where the chance of occurrence of the rare event is higher. These regions, called importance regions, are defined by comparing the value taken by a function of the system state, the importance function, with certain thresholds. Formulas for the importance function of general Jackson networks were derived in [1]. In [2] networks with Erlang service times with different shape parameters were studied. The rare set was defined as the number of customers in a target node exceeding a predefined threshold. Two models were studied: a network with 7 nodes, all of them at "distance" 1 or 2 from the target node, and a 3-queue tandem network with the loads of the first and second queues much greater than the load of the third queue. Low probabilities were accurately estimated within short computational times in both models. In this paper we extend the simulation study made in [2] in a twofold direction. On the one hand, we also simulate two additional types of networks that could be difficult for rare event simulation: a large network with 15 nodes, some of them at "distance" greater than 2, and a network with 2 nodes and very strong feedback. On the other hand, we use different non-exponential distributions, such as hyperexponential and Erlang, for modelling the interarrival and/or service times. This study will give us more insight for finding importance functions that could lead to good estimates of the probability of interest in most networks. [1] Villén-Altamirano, J. 2010. Importance functions for RESTART simulation of general Jackson networks. European Journal of Operational Research, 203(1):156-165. [2] Villén-Altamirano, J. 2009. RESTART simulation of networks of queues with Erlang service times. Proc. Winter Simulation Conference, Austin (USA), 1146-1154.

 
11:40-12:05 Reijsbergen, D; de Boer, P-T; Scheinhardt, W (Twente)
  Transient behaviour in highly dependable Markovian systems: new regimes, many paths Sem 1
 

In recent years, probabilistic analysis of highly dependable Markovian systems has received considerable attention. Such systems typically consist of several component types, subject to failures, with spare components for replacement while repair is taking place. System failure occurs when all (spare) components of one or several types have failed. In this work we try to estimate the probability of system failure before some fixed time bound $\tau$ via stochastic simulation. Obviously, in a highly dependable system, system failure is a rare event, so we apply importance sampling (IS) techniques, based on knowledge of the behaviour of the system and the way the rare event occurs. Interestingly, we can discern quite a few different situations to explain why system failure is rare, each with its own typical way in which the rare event is reached, namely: (1) low component failure rates, (2) a small value of $\tau$, (3) many spare components and (4) high component repair rates. Each of these can be considered as a limiting regime in which some model parameter tends to $0$ or infinity. Classifying this parameter as the 'rarity parameter', we can measure the performance of an IS scheme by how well it does in the asymptote involved. We could also combine regimes, which sometimes leads to new cases and sometimes not (e.g. the limit in which both failure and repair rates become small is equivalent to $\tau$ becoming small). For cases (1) and (2), a combination of balanced failure biasing and forcing was proven to have bounded relative error in [2]. In [1] an alternative estimator was proposed, based on the dominant path to failure, the idea being that when an event is rare, deviations from the most likely path to this event become even more rare. However, in several model checking problems an analysis based on dominant paths fails to identify a well-performing change of measure. The reason is that the contribution of some other paths to the probability of interest is too large to neglect or, more formally speaking, that the contribution of these paths does not vanish asymptotically. In our paper, we first prove that in the asymptote of case (3), which is interesting in its own right, the dominant path to failure indeed does determine the entire rare event, as in cases (1) and (2). Then we demonstrate that this is not true for case (4). We propose a state- and time-dependent change of measure for a simple, yet nontrivial, model. Our measure is based on the one in [1] and takes into account all paths that contribute to the probability of interest. Finally, we empirically verify that our estimators have good performance. [1] P.T. de Boer, P. L'Ecuyer, G. Rubino, and B. Tuffin. Estimating the probability of a rare event over a finite time horizon. In Proceedings of the 2007 Winter Simulation Conference, pages 403-411, 2007. [2] P. Shahabuddin. Importance sampling in the simulation of highly reliable Markovian systems. Management Science, 40(3):333-352, 1994.
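
A sketch of balanced failure biasing for the embedded chain of an assumed toy system with two component types (unit repair rates; the regenerative-cycle probability is estimated rather than the time-bounded one, so forcing is omitted): failure moves jointly receive probability DELTA, split equally, whenever repairs compete.

import random

EPS, SPARES = 0.001, (2, 2)   # assumed per-type failure rate and spares per type
DELTA = 0.5                    # total probability given to failure moves under BFB

def one_cycle():
    # one regenerative cycle from the all-up state under balanced failure biasing
    state, lr = [0, 0], 1.0
    while True:
        fails = [i for i in range(2) if state[i] < SPARES[i]]
        reps  = [i for i in range(2) if state[i] > 0]
        tot = EPS * len(fails) + 1.0 * len(reps)   # total rate, unit repair rates
        moves = []
        for i in fails:   # biased: failures share DELTA when repairs compete
            p_is = DELTA / len(fails) if reps else EPS / tot
            moves.append((i, +1, EPS / tot, p_is))
        for i in reps:    # repairs share the remaining 1 - DELTA
            moves.append((i, -1, 1.0 / tot, (1 - DELTA) / len(reps)))
        u, acc, chosen = random.random(), 0.0, moves[-1]
        for m in moves:
            acc += m[3]
            if u < acc:
                chosen = m
                break
        i, d, p_orig, p_is = chosen
        lr *= p_orig / p_is          # likelihood-ratio update for the chosen move
        state[i] += d
        if d == +1 and state[i] == SPARES[i]:
            return lr                # a type ran out of spares: system failure
        if state == [0, 0]:
            return 0.0               # back to all-up without failing

runs = 100_000
print(sum(one_cycle() for _ in range(runs)) / runs)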

 
12:05-12:30 Tuffin, B; Cancela, H; Rubino, G (Rennes; Uruguay; Rennes)
  Recursive variance reduction estimators for the static communication network reliability problem Sem 1
 

The exact evaluation of static network reliability parameters belongs to the NP-hard class of problems, and Monte Carlo simulation is therefore a relevant tool to provide their evaluations [2]. The first goal of this presentation is to review a Recursive Variance Reduction (RVR) estimator for the unreliability parameter which works by recursively reducing the graph based on a random choice (following a natural distribution) of the first working link on selected cuts [1]. We show that the method does not verify the bounded relative error (BRE) property [3] as the reliability of individual links goes to one, i.e., that the estimator is not robust to highly reliable links. We then propose to use the RVR principle in conjunction with the importance sampling technique. Two new estimators are presented: the first one, called Balanced RVR, follows a uniform distribution to choose the first working link on cuts, while the second, called Zero-Variance Approximation RVR, tries to mimic the distribution used by the (ideal) estimator with variance zero [4]. We show that in both cases the BRE property is verified and, moreover, that a Vanishing Relative Error property can be obtained by the Zero-Variance Approximation RVR under sufficient conditions. Finally, numerical illustrations of the power of both methods are provided. [1] H. Cancela and M. El Khadiri. A recursive variance-reduction algorithm for estimating communication-network reliability. IEEE Tr. on Reliability, 44(4):595-602, 1995. [2] H. Cancela, M. El Khadiri, and G. Rubino. Rare events analysis by Monte Carlo techniques in static models. In G. Rubino and B. Tuffin, editors, Rare Event Simulation using Monte Carlo Methods, pages 145-170. John Wiley & Sons, 2009. [3] P. L'Ecuyer, J. H. Blanchet, B. Tuffin, and P. W. Glynn. Asymptotic robustness of estimators in rare-event simulation. ACM Tr. on Modeling and Computer Simulation, 2010. To appear. [4] P. L'Ecuyer, G. Rubino, S. Saggadi, and B. Tuffin. Approximate zero-variance importance sampling for static network reliability estimation. Submitted. Available at http://www.irisa.fr/dionysos/pages_perso/tuffin/Publis/static-IS.pdf, 2010.

 
12:30-13:30 Lunch at Wolfson Court
14:15-15:05 Wang, H; Setayeshgar, L (Brown)
  Efficient importance sampling for a feed-forward network Sem 1
 

We consider a feedforward network with a single server station serving two classes of jobs, where one class of jobs has preemptive priority over the other. The rare event of interest is total population overflow. We rigorously identify the large deviation rate of the rare event probabilities and construct piecewise constant, asymptotically optimal importance sampling schemes.

 
15:05-15:30 Addie, RJ; Pao, DCW; Wong, EWM (Southern Queensland; Hong Kong)
  Snapshot simulation - an importance sampling technique for traffic with heavy tailed flows Sem 1
 

Two key ideas have informed importance sampling and rare event sampling of traffic models in the scientific literature: (i) distortion of probabilities together with use of the likelihood ratio to correct the collected statistics, and (ii) the RESTART method, or particle methods, in which many simulation threads are run simultaneously, the threads which focus on the rare events of interest are selected, and the collected statistics are corrected by keeping an appropriate weight for each thread. However, these methods fail when applied to traffic processes with heavy-tailed flow lengths, because the threads must be very long in order to adequately explore the state space of the system, including the region containing the rare events whose probability we wish to measure. In this paper a new method which is able to quickly simulate very long simulation threads will be described [1, 2]. In this method all threads are infinitely long, but the level of detail in each thread reduces (to zero, in the limit) as the time coordinate of the thread approaches $-\infty$. Threads may be distorted by a likelihood ratio, and threads may be selected and duplicated, so this method is complementary to the two importance sampling techniques described above. In this paper the method will be extended to allow observation of delays between successive events, as well as merely the statistics of the state of the system at a certain point in time. This requires care to ensure that the events observed are not unduly affected by the omitted details. [1] R. G. Addie. Snapshot simulation of internet traffic: fast and accurate for heavy-tailed flows. In Proceedings of the 1st International Workshop on the Evaluation of Quality of Service through Simulation in the Future Internet. March 2008, ICST. [2] R. G. Addie. Snapshot simulation of internet traffic: queueing of fixed-rate flows. In Proceedings of the 2nd International Workshop on the Evaluation of Quality of Service through Simulation in the Future Internet. March 2009, ICST.

 
15:30-16:15 Afternoon Tea
16:15-17:05 Meyn, S; Duffy, KR; Kontoyiannis, I (Illinois; Ireland; Greece)
  Simulating the mean of a skip free Markov chain Sem 1
17:05-17:30 Adler, RJ; Blanchet, JH; Liu, J (Israel; Columbia; Columbia)
  Efficient Monte Carlo for high excursions of Gaussian random fields Sem 1
 

We focus on the design and analysis of efficient Monte Carlo methods for computing the tail probability at level b of the maximum of a Gaussian random field, and the associated conditional expectations of the field given excursions at levels larger than b. Naïve Monte Carlo takes an exponential computational cost in b to estimate such tail probabilities and associated conditional expectations with prescribed relative accuracy. In contrast, our Monte Carlo procedures exhibit at most polynomial complexity in b, assuming only that the mean and covariance functions are Hölder continuous. In the presence of more regularity conditions, such as homogeneity and smoothness, the complexity results can be further improved to constant. Central to the design of the Monte Carlo scheme and its efficiency analysis is a change of measure that is NOT of the traditional exponential tilting form. This change of measure admits different representations that link the analysis of Monte Carlo methods to the random field's geometry. This feature is appealing to both simulation design and the theoretical development of random fields.

 
18:45-19:30 Dinner at Wolfson Court
Wednesday 23 June
09:00-09:50 Mandjes, M; Glynn, P (Amsterdam; Stanford)
  Simulation-based computation of the workload correlation function in a Lévy-driven queue Sem 1
 

In this paper we consider a single-server queue with Lévy input, and in particular its workload process $(Q_t)_{t\ge 0}$, focusing on its correlation structure. With the correlation function defined as $r(t):= {\mathbb C}{\rm ov}(Q_0,Q_t)/{\mathbb V}{\rm ar}\, Q_0$ (assuming the workload process is in stationarity at time 0), we first study its transform $\int_0^\infty r(t) e^{-\vartheta t}{\rm d}t$, both for the case that the Lévy process has positive jumps, and that it has negative jumps. These expressions allow us to prove that $r(\cdot)$ is positive, decreasing, and convex, relying on the machinery of completely monotone functions. For the light-tailed case, we estimate the behavior of $r(t)$ for $t$ large. We then focus on techniques to estimate $r(t)$ by simulation. Naive simulation techniques require roughly $(r(t))^{-2}$ runs to obtain an estimate of a given precision, but we develop a coupling technique that leads to substantial variance reduction (required number of runs being roughly $(r(t))^{-1}$). If this is augmented with importance sampling, it even leads to a logarithmically efficient algorithm. We present a set of simulation experiments, underscoring the superior performance of our techniques.
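
For orientation, here is the naive baseline the paper improves upon, estimating $r(t)$ from one long trajectory of a discrete-time reflected random walk via the Lindley recursion; the coupling and importance sampling estimators of the paper are substantially more involved. The increment distribution, horizon and lag are assumptions.

import numpy as np

rng = np.random.default_rng(0)
T, burn, t = 500_000, 10_000, 50          # assumed horizon, burn-in and lag
inc = rng.exponential(0.8, T) - 1.0       # i.i.d. increments with negative drift

q = np.empty(T)                           # Lindley recursion: Q_{k+1} = max(Q_k + inc_k, 0)
q[0] = 0.0
for k in range(T - 1):
    q[k + 1] = max(q[k] + inc[k], 0.0)

q = q[burn:]                              # discard the transient, then estimate r(t)
print(np.cov(q[:-t], q[t:])[0, 1] / np.var(q))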

 
09:50-10:40 Griffiths, R (Oxford)
  Simulation of ancestral histories of genes Sem 1
 

An empirical distribution of the stochastic history of a sample of genes, conditional on their types, can be found by an advanced simulation technique of sequential importance sampling (SIS) on coalescent histories. If the sample data are DNA sequences, then the pattern of mutations provides information about the ancestral tree of the data. For example, empirical distributions of the time to the most recent common ancestor of the genes and ages of mutations, conditional on the tree topology, can be found. Coalescent processes looking back in time are dual processes to diffusion process models of how gene frequencies evolve forward in time. Proposal distributions for SIS in coalescent histories can be constructed from the generator of the diffusion process. This talk will sketch the SIS technique and present examples of ancestral inference.

 
10:40-11:10 Morning Coffee
11:10-12:00 Rosenthal, JS (Toronto)
  Optimising and adapting the Metropolis algorithm Sem 1
 

The Metropolis algorithm is a very popular method of approximately sampling from complicated probability distributions, especially those arising in Bayesian inference. A wide variety of proposal distributions are available, and it can be difficult to choose among them. We will discuss optimal proposals under various circumstances. We will also consider the possibility of having the computer automatically "adapt" the algorithm while it runs, to improve and tune on the fly.
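
A minimal sketch of such on-the-fly adaptation, assuming a one-dimensional toy target: a Robbins-Monro update with diminishing step sizes steers the random-walk proposal scale toward a prescribed acceptance rate.

import numpy as np

rng = np.random.default_rng(0)
log_pi = lambda x: -0.5 * x * x          # toy target: standard normal, up to a constant

x, log_sigma, target = 0.0, 0.0, 0.44    # 0.44 is roughly the optimal 1-d acceptance rate
acc = 0
for n in range(1, 50_001):
    y = x + np.exp(log_sigma) * rng.standard_normal()
    accepted = np.log(rng.random()) < log_pi(y) - log_pi(x)
    if accepted:
        x, acc = y, acc + 1
    # diminishing adaptation: the step size 1/sqrt(n) vanishes, preserving ergodicity
    log_sigma += ((1.0 if accepted else 0.0) - target) / np.sqrt(n)
print(acc / 50_000, np.exp(log_sigma))   # realised acceptance rate and adapted scale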

 
12:30-13:30 Lunch at Wolfson Court
13:40-14:30 Roberts, G; Papaspiliopoulos, O (Warwick; Pompeu Fabra)
  Retrospective simulation and the Bernoulli factory Sem 1
 

Retrospective simulation techniques offer flexible and powerful methods for enhancing well-established simulation tools such as Rejection Sampling, Importance Sampling, MCMC and Sequential Monte Carlo. Special cases have been known for a while (for instance coupling from the past for simulating from Markov chain stationary distributions). This presentation will touch on a number of applications of the methodology, including the exact simulation of diffusion sample paths, and other (apparently) infinite-dimensional simulation problems. The second half of the talk will present joint work with Krzysztof Latuszynski and Ioannis Kosmidis on a solution to the well-known Bernoulli factory problem: given a black box for generating events of probability p, how can we construct a black box to generate events of probability f(p)? Beskos, A., Papaspiliopoulos, O. and Roberts, G.O. Retrospective exact simulation of diffusion sample paths with applications. Bernoulli, 12(6):1077-1098, 2006. Papaspiliopoulos, O. and Roberts, G.O. Retrospective Markov chain Monte Carlo methods for Dirichlet process hierarchical models. Biometrika, 95:169-186, 2008. Latuszynski, K., Kosmidis, I., Papaspiliopoulos, O. and Roberts, G.O. Simulating events of unknown probabilities via reverse time martingales. To appear in Random Structures and Algorithms, 2010.
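
The general Bernoulli factory is subtle; a classical special case that conveys the flavour is f(p) = exp(-p), using the identity E[(1-p)^N] = exp(-p) for N ~ Poisson(1). The coin bias 0.3 below is an assumption for the demonstration only, since the factory never needs to know p.

import math
import random

def coin(p=0.3):
    # the given black box: an event of unknown probability p
    return random.random() < p

def poisson1():
    # N ~ Poisson(1) via Knuth's product-of-uniforms method
    k, prod = 0, 1.0
    while True:
        k += 1
        prod *= random.random()
        if prod <= math.exp(-1):
            return k - 1

def exp_factory():
    # outputs 1 with probability exp(-p): all N coin flips must come up tails
    return all(not coin() for _ in range(poisson1()))

trials = 200_000
print(sum(exp_factory() for _ in range(trials)) / trials, math.exp(-0.3))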

 
14:30-18:00 Football
19:30-22:30 Hog Roast Garden Party at The Moller Centre
