2.00pm Phil Ansell (School of Mathematics & Statistics,
Newcastle University)
A general preemptive scheduling model with loss, restless bandits and
linear programming relaxations
2.00pm Augustin Chaintreau (Thomson Research Lab., Paris)
Last-Passage Percolation in Pattern Invariant Graphs,
and the Throughput of Large Distributed Systems
3.00pm Vishal Misra (Columbia University)
Modeling and Analysis of Generalized Slotted-Aloha MAC Protocols in
Cooperative, Competitive and Adversarial Environments
On a locally finite point set, a navigation defines a path through
the point set from one point to another. The set of paths leading to
a given point defines a tree, the navigation tree. In this talk,
we will analyze the properties of the navigation tree when the point set
is a Poisson point process on R^d. We examine the distribution of
stable functionals, the local weak convergence of the navigation
tree, the asymptotic average of a functional along a path, and the
shape of the navigation tree and its topological ends. We illustrate
our work on small-world graphs, where new results are established.
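As a concrete illustration of one such navigation (a minimal sketch under
assumed conventions, not the construction analyzed in the talk), the
following Python fragment runs a greedy navigation towards a target on a
planar Poisson point process; the range r and the greedy rule are
illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)

    # Homogeneous Poisson point process of intensity lam on [0, L]^2.
    lam, L, r = 1.0, 50.0, 2.0
    n = rng.poisson(lam * L * L)
    pts = rng.uniform(0.0, L, size=(n, 2))
    target = np.array([L / 2, L / 2])  # navigate towards the centre

    def greedy_step(x):
        # Among points within range r of x that are strictly closer to
        # the target, move to the one closest to the target; a dead end
        # (no admissible neighbour) returns None.
        d_x = np.linalg.norm(pts - x, axis=1)
        d_t = np.linalg.norm(pts - target, axis=1)
        ok = (d_x > 0) & (d_x <= r) & (d_t < np.linalg.norm(x - target))
        if not ok.any():
            return None
        return pts[ok][np.argmin(d_t[ok])]

    # Follow the navigation from the point nearest the origin; the union
    # of the paths from all starting points is the navigation tree.
    x = pts[np.argmin(np.linalg.norm(pts, axis=1))]
    hops = 0
    while x is not None:
        x, hops = greedy_step(x), hops + 1
    print(f"navigation stopped after {hops} hops")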
2.00pm Phil Ansell (School of Mathematics & Statistics,
Newcastle University)
In the first part of this talk we will consider a general preemptive
scheduling model with loss, which provides for decisions about the
allocation of service to a collection of N reward processes (or
bandits). The goal of the analysis is to develop policies which
maximise the total expected discounted reward earned from the provision
of service over an infinite horizon. In the second part of the talk we
will discuss a mathematical programming approach to the classical
restless bandit problem.
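For orientation, such mathematical programming approaches are typically
built on the classical (Whittle) relaxation. A standard statement
(background only, not the speaker's exact formulation), with discount
factor beta, activation decisions a_n(t) in {0,1} and M bandits served
per period, is

    \max_{\pi} \sum_{n=1}^{N} \mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty}
        \beta^{t} R_n\big(X_n(t), a_n(t)\big)\Big]
    \quad \text{s.t.} \quad \sum_{n=1}^{N} a_n(t) = M \text{ for all } t,

where the hard per-period constraint is relaxed to hold only in
(discounted) expectation:

    \sum_{t=0}^{\infty} \beta^{t}\, \mathbb{E}_{\pi}\Big[\sum_{n=1}^{N}
        a_n(t)\Big] = \frac{M}{1-\beta}.

Dualizing the relaxed constraint decouples the problem into N
single-bandit subproblems, which is what makes linear programming
bounds and index policies tractable.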
2.00pm Augustin Chaintreau (Thomson Research Lab., Paris)
Many distributed systems can be described as an infinite collection of
tasks interacting via a precedence rule: a given task starts as soon as
a set of neighboring tasks has been completed, and itself takes some
time to complete. Examples may be found in queuing theory, in
statistical physics, and in the performance analysis of collaborative
networking protocols.
The global performance of such a large distributed system is
characterized by (1) the way its entities are organized (in a line, a
lattice, a tree, possibly randomly, ...) and (2) the local interaction
between neighboring tasks. Typical questions that arise are: "Can we
characterize the throughput of this system?", "Does this throughput
vanish as the system becomes larger?", "Does this depend on the law of
the task completion times?"
This talk proves that, for a light-tailed distribution of task
completion times, a simple condition (the sharpness criterion)
characterizes the systems with a scalable throughput. It then presents
an application to the stationary regime of infinite systems.
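As a toy instance of such a system (a sketch under assumed conventions,
not a model from the talk), consider a line of N stations where task k
at station i may start only once task k-1 at station i and task k at
station i-1 are complete. Completion times then satisfy the last-passage
recursion T[i,k] = max(T[i-1,k], T[i,k-1]) + s[i,k], and the throughput
is read off from the growth rate of T.

    import numpy as np

    rng = np.random.default_rng(0)

    # N stations in a line, K tasks per station, i.i.d. Exp(1) times.
    N, K = 200, 200
    s = rng.exponential(1.0, size=(N, K))

    # Last-passage recursion: a task starts once both of its
    # precedence constraints are satisfied, then takes s[i, k].
    T = np.zeros((N, K))
    for i in range(N):
        for k in range(K):
            up = T[i - 1, k] if i > 0 else 0.0
            left = T[i, k - 1] if k > 0 else 0.0
            T[i, k] = max(up, left) + s[i, k]

    # Empirical throughput of the last station (tasks per unit time);
    # for Exp(1) times T[n, n] / n -> 4, so this tends to 1/4 > 0:
    # the throughput of this particular system is scalable.
    print(f"throughput ~ {K / T[-1, -1]:.3f}")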
We propose a new scheme for content distribution of large files that is
based on network coding. With network coding, each node of the
distribution network generates and transmits encoded blocks of
information. We show that the randomization introduced by the coding
process eases the scheduling of block propagation and, hence, improves
the performance of the distribution. This is particularly important in
large unstructured overlay networks, where the nodes need to make
decisions based on local information only.
We will also report our experiences building and using a peer-to-peer
system that is based on network coding and propose a new efficient
mechanism for verifying encoded blocks.
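To make the role of randomization concrete, here is a toy sketch of
random linear coding over GF(2) (practical systems, including the one
described, typically work over GF(2^8); all details here are
illustrative). A receiver can reconstruct the k source blocks as soon as
the coefficient vectors it has collected reach full rank.

    import numpy as np

    rng = np.random.default_rng(0)

    k, block_len = 8, 16  # number of source blocks and their size
    blocks = rng.integers(0, 256, size=(k, block_len), dtype=np.uint8)

    def encode():
        # A coded block: random GF(2) coefficients together with the
        # XOR of the selected source blocks. Nodes forward such
        # combinations instead of raw blocks.
        coeff = rng.integers(0, 2, size=k, dtype=np.uint8)
        payload = np.zeros(block_len, dtype=np.uint8)
        for i in np.flatnonzero(coeff):
            payload ^= blocks[i]
        return coeff, payload

    def gf2_rank(M):
        # Gaussian elimination over GF(2).
        M, rank = M.copy(), 0
        for col in range(M.shape[1]):
            rows = np.flatnonzero(M[rank:, col])
            if rows.size == 0:
                continue
            p = rank + rows[0]
            M[[rank, p]] = M[[p, rank]]
            for row in range(M.shape[0]):
                if row != rank and M[row, col]:
                    M[row] ^= M[rank]
            rank += 1
            if rank == M.shape[0]:
                break
        return rank

    # Collect coded blocks until the coefficient matrix has rank k, at
    # which point Gaussian elimination recovers the source blocks.
    received = []
    while not received or gf2_rank(np.array([c for c, _ in received])) < k:
        received.append(encode())
    print(f"decodable after {len(received)} coded blocks (k = {k})")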
We consider the problem of how throughput in a multihop wireless network
with randomly located nodes scales as the number of users n grows. Gupta
and Kumar found an upper bound on the actual throughput that each node
experiences. This bound, which is an essential limitation of such
networks, decreases asymptotically like 1/sqrt(n). We show that the
bound is tight, by giving a routing and scheduling scheme that can
achieve the same 1/sqrt(n) per-node transmission rate.
This contrasts with previous achievability results suggesting that a
1/sqrt(n log n) reduced rate is the price to pay for the randomness of
the node locations. The key arguments in bridging the gap between the
lower and upper bounds are provided by percolation theory. When the node
transmission ranges are too large, the network is fully connected but
generates excessive interference. In the short-range regime, the
network loses connectivity.
Percolation theory ensures that a connected backbone forms in the
transition region between these two extreme scenarios. This backbone
does not include all the nodes; nevertheless, it is sufficiently rich in
crossing paths so that it can transport the total amount of traffic.
Joint work with Massimo Franceschetti (UC San Diego), Olivier Dousse
(Deutsche Telekom Labs, Berlin) and David Tse (UC Berkeley).
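The range trade-off can be seen in a small experiment (an illustration
only, not the authors' construction): place n nodes uniformly in the
unit square, connect pairs within range r, and watch the largest
connected component as r crosses the percolation threshold.

    import numpy as np
    from collections import deque

    rng = np.random.default_rng(0)

    # n nodes uniform in the unit square; a link exists between two
    # nodes when they are within transmission range r of each other.
    n = 1500
    pts = rng.uniform(0.0, 1.0, size=(n, 2))
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)

    def largest_component_fraction(r):
        # Breadth-first search over the random geometric graph.
        adj = d2 <= r * r
        seen = np.zeros(n, dtype=bool)
        best = 0
        for start in range(n):
            if seen[start]:
                continue
            seen[start], queue, size = True, deque([start]), 0
            while queue:
                u = queue.popleft()
                size += 1
                for v in np.flatnonzero(adj[u] & ~seen):
                    seen[v] = True
                    queue.append(v)
            best = max(best, size)
        return best / n

    for r in (0.01, 0.03, 0.05, 0.10):
        frac = largest_component_fraction(r)
        print(f"r = {r:.2f}: largest component spans {frac:.0%} of nodes")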
Let Z be a two-dimensional Brownian motion confined to the
non-negative quadrant by oblique reflection at the boundary. The only
restrictions imposed on the process data are those required for Z to be
a semi-martingale and for Z to have a stationary distribution. Analytic
methods are used to derive a simple formula for the asymptotic decay
rate of the stationary distribution along each axis, and an
interpretation in terms of minimum-energy paths is provided.
Joint work with J. J. Hasenbein, L. A. Shepp and S. R. S. Varadhan.
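For readers new to the setting, the semimartingale formulation assumed
above is the standard one (background, not material from the talk):

    Z(t) = Z(0) + \theta t + \sigma B(t) + R\,Y(t) \in \mathbb{R}_{+}^{2},

where B is a standard planar Brownian motion, column j of the reflection
matrix R gives the direction of pushing on the face {z_j = 0}, and Y_j
is continuous and nondecreasing, increasing only at times when
Z_j(t) = 0. The decay rate along the j-th axis is then the exponent

    \alpha_j = -\lim_{x \to \infty} \frac{1}{x} \log
        \pi\big(\{ z : z_j > x \}\big),

where \pi denotes the stationary distribution of Z.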
3.00pm Vishal Misra (Columbia University)
Aloha and its slotted variant are commonly deployed Medium Access
Control (MAC) protocols in environments where multiple transmitting
devices compete for a medium, yet may have difficulty sensing each
other's presence. We model and evaluate the throughput that can be
achieved in a system where nodes compete for bandwidth using a
generalized version of slotted-Aloha protocols.
We evaluate the channel utilization and fairness of these types of
protocols for a variety of node objectives, including maximizing the
aggregate throughput of the channel, each node greedily maximizing its
own throughput, and attacker nodes attempting to jam the channel. If
all nodes are selfish and greedily attempt to maximize their own
throughputs, a situation similar to the traditional Prisoner's Dilemma
arises. Our results reveal that under heavy loads, greedy strategies
reduce the utilization, and that attackers cannot do much better than
attacking during randomly selected slots.
Joint work with Richard T.B. Ma and Dan Rubenstein
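As a baseline for the generalized protocols above, the classical
symmetric slotted-Aloha calculation (a standard sketch with illustrative
parameters) already exhibits both the cooperative optimum and the greedy
collapse.

    import numpy as np

    rng = np.random.default_rng(0)

    # n saturated nodes each transmit independently with probability p
    # in every slot; a slot succeeds iff exactly one node transmits,
    # so expected utilization is n * p * (1 - p)**(n - 1).
    def utilization(n, p):
        return n * p * (1 - p) ** (n - 1)

    n = 20
    p = 1 / n  # the cooperative, aggregate-throughput-maximizing choice
    print(f"cooperative p = 1/n: utilization = {utilization(n, p):.3f}")
    # -> approaches 1/e ~ 0.368 as n grows

    # Monte Carlo check of the formula.
    slots = 100_000
    tx = rng.random((slots, n)) < p
    print(f"simulated utilization = {(tx.sum(axis=1) == 1).mean():.3f}")

    # If every node greedily raises its own p, all drive p towards 1
    # and the channel jams -- the Prisoner's Dilemma flavour above.
    print(f"greedy p = 1: utilization = {utilization(n, 1.0):.3f}")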