# Workshop Programme

## 11 - 15 January 2010

### New Topics at the Interface between Probability and Communications


Timetable

#### Monday 11 January

**08:30-09:30** Registration

**09:30-09:35** Welcome from David Wallace (INI Director)

**09:35-10:35** Ganesh, A (Bristol): *Decentralised load balancing in closed and open systems* (Sem 1)

We study the performance of random load resampling strategies in parallel server systems. Clients initially attach to an arbitrary server, but may switch servers independently at random instants of time in an attempt to improve their service rate. Load resampling is particularly relevant in scenarios where clients cannot predict the load of a server before actually attaching to it. We derive tight estimates of the time it takes for a given resampling strategy to achieve a perfect balance of the load across servers in a closed system. We also study open systems, where clients arrive according to a random process and leave upon service completion. In this scenario, we characterize the stability region of various resampling strategies and derive approximate estimates of the sojourn time obtained by letting the number of servers grow large.
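The abstract does not specify the resampling strategies studied; as a minimal sketch (hypothetical, not the speakers' model), the following simulates one natural closed-system policy: a random client samples a new server uniformly and moves only if the move strictly reduces the load imbalance.

```python
import random

def resample_until_balanced(n_clients, n_servers, seed=0):
    """Simulate naive random load resampling in a closed system.
    Returns the number of resampling attempts until the load is
    perfectly balanced (n_clients assumed divisible by n_servers)."""
    rng = random.Random(seed)
    # Clients initially attach to an arbitrary server.
    assignment = [rng.randrange(n_servers) for _ in range(n_clients)]
    load = [0] * n_servers
    for s in assignment:
        load[s] += 1
    target = n_clients // n_servers  # perfectly balanced load per server
    steps = 0
    while max(load) > target:
        c = rng.randrange(n_clients)
        old, new = assignment[c], rng.randrange(n_servers)
        # Move only if it strictly decreases the sum-of-squares imbalance,
        # i.e. the sampled server is at least two clients lighter.
        if load[new] < load[old] - 1:
            load[old] -= 1
            load[new] += 1
            assignment[c] = new
        steps += 1
    return steps
```

The talk's interest is precisely in how `steps` scales with the number of clients and servers for different policies.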

**10:35-11:00** Coffee

**11:00-12:00** Medard, M (MIT): *Some new(ish) results in information theory* (Sem 1)

**12:15-13:30** Lunch

**14:00-15:00** Prabhakar, B (Stanford): *The Struggle for Independence* (Sem 1)

Network theorists seek accurate, parsimonious stochastic models of large-scale networks. A very useful property for obtaining such models is the statistical independence of the congestion measures at the different nodes of the network. This property allows a large-scale system to be decomposed into probabilistically independent modules, facilitating analysis. The successful Jackson and Kelly network models possess this property. However, modern network systems, such as load balancers, switching fabrics and the Internet, do not, so a new approach is needed. Using recent ideas from statistical physics, notably the Cavity Method, we describe a framework for analyzing large-scale complex networks. We apply this framework to randomized load balancing systems, generalizing the class of tractable models. A key observation is that, in order to be scalable, modern networks are designed so that the interaction between components is kept to a minimum, making the components only weakly dependent. Thus, any finite subset of the components becomes independent as the network size grows without bound. We describe the steps involved in establishing independence and list a few practical consequences. (Joint work with Maury Bramson and Yi Lu.)

**15:00-15:30** Tea

**15:30-16:30** Resnick, S (Cornell): *Modeling data network sessions* (Sem 1)

A session is a higher-order entity obtained by amalgamating packets, connections, or groups of connections according to specified but not unique rules. For example, the flow of packets past a sensor can be amalgamated into sessions using a threshold rule based on gaps between packet arrivals. We briefly review some probability modeling based on sessions before turning to statistical analysis. Statistical analysis of packet-based sessions is complex: session duration (D) and size (S) are jointly heavy-tailed, but the average transmission rate (R = S/D) is sometimes not heavy-tailed, and the arrival times of sessions are not Poisson. By segmenting sessions using a peak-rate covariate, we find that, conditional on a peak-rate decile, session initiations within that decile can be modeled as Poisson. For modeling the distribution of (D, S, R), the conditional extreme value (CEV) model may be a useful variant. (Joint work at various times with Jan Heffernan, Bikramjit Das, Luis Lopez-Oliveros, T. Mikosch, Bernardo D'Auria, H. Rootzen, A. Stegemen.)

**17:30-18:30** Welcome Wine Reception

**18:30-19:30** Dinner

#### Tuesday 12 January

**09:30-10:30** Montanari, A (Stanford): *Iterative Algorithms* (Sem 1)

The problem of estimating a high-dimensional vector from a set of linear observations arises in a number of engineering disciplines. It becomes particularly challenging when the underlying signal has some non-linear structure that needs to be exploited. I will present a new class of iterative algorithms, inspired by ideas from probabilistic graphical models, that appear to be asymptotically optimal in specific contexts. I will discuss in particular the application to compressed sensing problems. (Joint work with David L. Donoho and Arian Maleki.)
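The talk's message-passing algorithms are not reproduced here; as a generic illustration of iterative sparse recovery in compressed sensing (classic iterative soft-thresholding, not the speaker's algorithm), a minimal sketch:

```python
import numpy as np

def ista(A, y, lam=0.01, iters=1000):
    """Iterative soft-thresholding (ISTA): solves the lasso problem
    min_x 0.5*||y - A x||^2 + lam*||x||_1 by alternating a gradient
    step on the quadratic term with coordinatewise shrinkage."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + A.T @ (y - A @ x) / L      # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

# Recover a sparse vector from a few random linear measurements.
rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                       # signal dim, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1 / np.sqrt(m), (m, n))
x_hat = ista(A, A @ x_true)
```

The approximate-message-passing algorithms of the talk add an "Onsager" correction term to this iteration, which is what makes their high-dimensional behaviour exactly trackable.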

**10:30-11:00** Coffee

**11:00-12:00** Tsitsiklis, J (MIT): *Some models of information aggregation and consensus in networks* (Sem 1)

A primary function of many engineered and social networks is to aggregate the information obtained by the nodes of a network. We discuss a few of the models and thrusts that have been studied, starting with a model of "social learning" by rational (Bayesian) agents, and its connections with information fusion models in the engineering literature. We then consider a set of agents (processors, decision makers, sensors, etc.) who reach consensus through an iterative process involving the exchange and averaging of local values. We discuss a number of models, application contexts, and convergence results, and the connections with Markov chain theory.
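The iterative averaging process mentioned in the abstract can be sketched in a few lines: each agent repeatedly replaces its value by a weighted average of its neighbours' values, x ← Wx. For a doubly stochastic weight matrix W on a connected (aperiodic) graph, every entry converges to the mean of the initial values, which is exactly the Markov-chain connection the talk alludes to.

```python
import numpy as np

# Ring of 4 agents with lazy uniform neighbour averaging.
# W is doubly stochastic: rows and columns each sum to 1.
W = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])
x = np.array([1.0, 5.0, 2.0, 8.0])  # initial local values, mean = 4.0
for _ in range(200):
    x = W @ x                        # one round of local averaging
# all entries are now numerically equal to the average, 4.0
```

The convergence rate is governed by the second-largest eigenvalue modulus of W, just as for the mixing time of the corresponding Markov chain.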

**12:15-13:30** Lunch

**14:00-16:30** Open problems session 1

**18:30-19:30** Dinner

#### Wednesday 13 January

**09:30-10:30** Poster Session

**10:30-11:00** Coffee

**11:00-12:00** Kumar, PR (Illinois): *A Formulation and Theory for Delay Guarantees in Wireless Networks* (Sem 1)

Delay guarantees have been problematic in networking: the usual focus of theory is only on providing throughput guarantees. Yet wireless networks will increasingly need to support applications requiring delay guarantees, e.g., voice-over-IP, interactive video, and control over networks. We propose a theoretical framework for addressing the problem of delay guarantees in wireless networks that incorporates three key issues in the specification of quality of service: delay, throughput, and channel reliability. A somewhat surprising necessary and sufficient condition characterizes when the quality-of-service requirements of a given set of nodes can be met. It can be checked in nearly linear time, providing a tractable admission control algorithm. Further, there are easily implementable scheduling policies that are feasibility optimal, in the sense that they can meet the demands of every feasible set of nodes. The theory can be extended to more general arrival patterns and fading processes, and can also be cast in a utility maximization framework for delay guarantees. (Joint work with I-Hong Hou and V. Borkar.)

**12:15-13:30** Lunch

**14:00-19:30** Free time

**19:30-21:00** Conference Dinner at Selwyn College

#### Thursday 14 January

**09:30-10:30** Wischik, D (UCL): *Two new problems in congestion control: MAC3, and restless bandits* (Sem 1)

1. Medium Access Control (MAC) in a wireless network is meant to manage contention between stations that all want access to the shared wireless medium. I will describe a different approach, Medium Access Coding and Congestion Control (MAC3), inspired by the Zigzag algorithm of Gollakota and Katabi (2008). I will argue that MAC3 should be more efficient than contention-based MAC, and that it will automatically solve the problems of "hidden terminals" and "exposed terminals". (Joint work with Devavrat Shah.)

2. At UCL we are implementing a multipath version of TCP. We found that our naive translation of fluid models into packet-level algorithms gave very unsatisfactory performance, since it did not take account of the stochastic nature of congestion feedback. I propose that we should instead derive congestion control algorithms through dynamic programming, in particular using a restless bandit model. I will demonstrate a modified TCP derived through dynamic programming, and I will discuss open issues.

**10:30-11:00** Coffee

**11:00-12:00** Tse, D (UC, Berkeley): *Information Theory of Wireless Networks: Approximate Max-Flow Min-Cut* (Sem 1)

A holy grail of information theory is to extend Shannon's original point-to-point results to wireless networks. Despite significant efforts over the past 40 years, progress has been slow. Unlike in wired networks, signals from different nodes interact in complex ways in wireless channels, in addition to being corrupted by noise. We discuss a recent approach based on approximating the noisy wireless channels by deterministic ones that capture the signal interactions. In the case of a single source and a single destination, we use this approach to prove an approximate max-flow min-cut theorem for wireless networks.

**12:15-13:30** Lunch

**14:00-16:30** Open problems session 2

**18:30-19:30** Dinner

#### Friday 15 January

**09:30-10:30** Orlitsky, A (UC, San Diego): *Probability Estimation over Large Alphabets* (Sem 1)

Many applications require estimating distributions over large alphabets from a small data sample. We outline the problem's history, theory, and applications, and describe recent constructions of asymptotically optimal estimators. The talk is self-contained and based on work with P. Santhanam, K. Viswanathan, J. Zhang, and others.
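The abstract does not name a specific estimator; the classic starting point for this problem is the Good-Turing estimate, shown below as one illustrative example (not necessarily the talk's construction). It estimates the total probability of symbols never seen in the sample as the fraction of the sample made up of symbols seen exactly once.

```python
from collections import Counter

def good_turing_missing_mass(sample):
    """Good-Turing estimate of the 'missing mass': the total
    probability of all symbols that do not appear in the sample,
    estimated as (# symbols seen exactly once) / (sample size)."""
    counts = Counter(sample)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / len(sample)

# In "abracadabra" only 'c' and 'd' appear exactly once,
# so the estimated missing mass is 2/11.
```

Estimators of this kind are what make large-alphabet estimation feasible when the sample is far smaller than the alphabet.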

**10:30-11:00** Coffee

**11:00-12:00** Walrand, J (UC, Berkeley): *Scheduling for Communication and Processing Networks* (Sem 1)

The talk reviews the history of, and recent results on, the scheduling of tasks that require access to a set of resources. The first step was the discovery that maximum weighted matching achieves the maximum throughput for a class of such scheduling problems that includes wireless networks and packet switches. However, this algorithm is too complex to be directly applicable. The second step was understanding when a simpler algorithm, longest queue first, also achieves maximum throughput. The third step was inventing a distributed algorithm in which tasks select an independent random back-off delay before asking for the resources; this delay has a mean value that decreases exponentially with the backlog. Using stochastic approximation theory, one shows that this algorithm achieves the maximum throughput. The fourth step is a modification of maximum weight matching that achieves the maximum utility in a processing network where tasks not only share resources but also require access to parts. The talk explains the intuition behind the results and the main ideas of the proofs.
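The maximum weighted matching step can be made concrete for an N x N input-queued switch. The brute-force sketch below (an illustration, not the talk's implementation) picks the input-to-output matching that serves the largest total queue length; its exponential search is exactly why the abstract calls the algorithm too complex to be directly applicable.

```python
from itertools import permutations

def max_weight_schedule(q):
    """Max-weight matching for an NxN input-queued switch.
    q[i][j] is the backlog of packets at input i destined for output j.
    Returns the permutation perm (perm[i] = output served by input i)
    maximizing the total backlog served in this time slot."""
    n = len(q)
    best, best_w = None, -1
    for perm in permutations(range(n)):   # exhaustive: n! matchings
        w = sum(q[i][perm[i]] for i in range(n))
        if w > best_w:
            best, best_w = perm, w
    return best

q = [[3, 0, 1],
     [0, 5, 2],
     [4, 1, 0]]
# best matching: input 0 -> output 2, 1 -> 1, 2 -> 0 (weight 1+5+4 = 10)
```

The later steps in the talk (longest queue first, random back-off with backlog-dependent mean) can be seen as cheaper approximations to this per-slot optimization.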

**12:15-13:30** Lunch

**14:00-15:00** Ramanan, K (Carnegie Mellon): *Stochastic networks and measure-valued processes* (Sem 1)

Many stochastic networks with complex scheduling disciplines are hard to analyze using finite-dimensional state representations. In the context of two specific stochastic networks arising in applications, we show how measure-valued representations can be used to study stability properties as well as to obtain explicit expressions for performance measures in a suitable (mean-field or heavy traffic) asymptotic regime.

**15:00-15:30** Tea

**15:30-16:30** Massoulie, L (THLAB): *Flows and matchings for P2P systems* (Sem 1)

In the first part of the talk, we consider the live streaming problem, which consists in broadcasting a stream of information to a collection of network nodes while minimizing network costs. To this end, we propose the "implicit primal-dual" scheme, whose simplicity makes it an interesting practical candidate. We characterize its behaviour at a fluid time scale and obtain desirable properties when the scheme is combined with random linear coding. In the second part of the talk, we consider the video-on-demand problem, in which users request access to content items at arbitrary random times. In this context, we consider a simple user policy for managing which content to keep. We use the general framework of loss networks to model the resulting performance and obtain optimality results on the ability of our scheme to let users serve each other's requests.

**18:30-19:30** Dinner