This page collects some background notes and references for the Fulkerson Lectures, delivered at Cornell University in December 2002.
How should flows through a network be organized, so that the system responds sensibly to differing user requirements, fluctuating loads and failures? The question is currently of considerable technological importance for communication networks, while in various other forms it has a long history in the fields of physics and economics. There is particular interest in whether analogies with fundamental concepts such as energy and price can provide insights into the design of communication networks.
This talk will discuss the fairness of flow control algorithms in communication networks, the stability of dynamical system representations, and how a shadow price may be defined on the sample path of a queueing process. The overall aim will be to understand how microscopic packet level processes, of an apparently crude and statistical nature, may lead to sophisticated macroscopic behaviour interpretable as the global optimization of a large-scale network.
Analogies from physics and economics (random walks, electrical networks, variational principles; road traffic, the Braess paradox, user equilibrium versus system optimum) and the bistability of routing in circuit-switched networks are discussed at length in: Network routing. Philosophical Transactions of the Royal Society A337 (1991) 343-367.
The models of flow control (primal and dual formulations; Lyapunov functions; tatonnement) are from: Rate control in communication networks: shadow prices, proportional fairness and stability. K, Aman Maulloo and David Tan. Journal of the Operational Research Society 49 (1998) 237-252.
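As a small numerical illustration of the primal dynamics from this paper — dx_r/dt = kappa*(w_r - x_r*mu(t)), where mu(t) is the price fed back by the network — the following sketch integrates the ODE for a single link. The capacity C, weights w, penalty-style price function and step sizes are illustrative choices of ours, not taken from the paper.

```python
# Sketch of the primal rate-control dynamics:
#   dx_r/dt = kappa * (w_r - x_r * mu(t)),   mu(t) = p(sum_s x_s(t))
# Illustrative assumptions: one link of capacity C = 1.0, two routes
# with weights w = (1, 2), and the penalty price p(y) = max(0, y - C)/eps.

C, eps, kappa, dt = 1.0, 0.01, 0.1, 0.01
w = [1.0, 2.0]
x = [0.1, 0.1]                      # initial sending rates

def price(y):
    """Shadow price charged by the link at aggregate load y."""
    return max(0.0, y - C) / eps

for _ in range(200_000):            # forward Euler integration of the ODE
    mu = price(sum(x))
    x = [xr + dt * kappa * (wr - xr * mu) for xr, wr in zip(x, w)]

print(x)                            # equilibrium rates settle in the ratio w1:w2
```

At the equilibrium each rate satisfies w_r = x_r*mu, so the rates are proportional to the weights — the Lyapunov function argument in the paper shows this equilibrium is globally attracting.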
Sample path shadow prices are discussed in: Resource pricing and the evolution of congestion control. R.J. Gibbens and K. Automatica 35 (1999) 1969-1985.
Internet congestion control can be viewed as a resource allocation mechanism implemented via a distributed computation by end-systems. It provides a concrete, measurable example of an economic tatonnement process, with TCP's congestion avoidance algorithms playing the role of a "Walrasian auctioneer" searching for market clearing allocations. What are the theoretical limits upon the scalability of such a distributed computation? In this talk we review recent progress, and discuss the constraints imposed by: limited information available to end-systems; the finite speed of light; inherent randomness; and incentive compatibility.
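A tatonnement process of this kind can be made concrete with a toy model (our own illustration, not a description of TCP itself): a link raises its price when demand exceeds capacity and lowers it otherwise, while each user with logarithmic utility w_r*log(x_r) responds along its demand curve x_r = w_r/mu.

```python
# Toy Walrasian tatonnement for a single link of capacity C.
# Illustrative assumptions: C = 1, two users with weights w = (1, 2).

C = 1.0
w = [1.0, 2.0]
mu = 0.5                            # initial price guess
gamma = 0.05                        # price adjustment speed

for _ in range(10_000):
    demand = sum(wr / mu for wr in w)            # aggregate demand at price mu
    mu = max(1e-9, mu + gamma * (demand - C))    # move price towards excess demand

x = [wr / mu for wr in w]
print(mu, x)    # market clears at mu = sum(w)/C = 3, allocations (1/3, 2/3)
```

The market-clearing price equates aggregate demand to capacity, and the resulting allocation is the weighted proportionally fair one; TCP performs an analogous search in a distributed, noisy fashion.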
In the last few years there has been an explosion of work on the modelling of Internet congestion control. Good places to see the variety of approaches adopted are:
Massoulie and Roberts have introduced and studied a flow level model of Internet congestion control that represents the randomly varying number of flows present in a network where bandwidth is dynamically shared between elastic document transfers.
In this talk the formalism of balanced fluid models and Brownian networks will be used to explore the behaviour of the flow level model in heavy traffic. Particular interest attaches to the phenomenon of entrainment, whereby congestion at some resources may prevent other resources from working at their full capacity.
Joint work with Ruth Williams.
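To make the flow level model concrete, here is a minimal single-link sketch (our own illustration): elastic flows arrive as a Poisson process, the link's capacity is shared equally among the flows in progress, and each flow departs when its document has been transferred. With exponential document sizes the number of flows in progress behaves as an M/M/1 queue, so its long-run mean should be close to rho/(1 - rho).

```python
# Flow-level model at a single link of capacity 1, simulated as the
# birth-death chain for the number of flows in progress.
# Illustrative assumptions: arrival rate lam = 0.5, mean document size 1,
# so the load is rho = 0.5 and the stationary mean number of flows is 1.

import random
random.seed(42)

lam, mu = 0.5, 1.0                  # flow arrival rate; total completion rate
n, t, area = 0, 0.0, 0.0            # flows in progress; clock; integral of n dt

for _ in range(500_000):            # simulate 500k transitions
    rate = lam + (mu if n > 0 else 0.0)
    dt = random.expovariate(rate)
    t += dt
    area += n * dt
    if n == 0 or random.random() < lam / rate:
        n += 1                      # a new flow arrives
    else:
        n -= 1                      # some flow completes its transfer

mean_flows = area / t
print(mean_flows)                   # close to rho/(1-rho) = 1 for rho = 0.5
```

In a network with many resources the shared bandwidth is no longer an equal split, and it is the interaction across resources that produces the entrainment phenomenon studied in the talk.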
The flow level Internet model was introduced in: Bandwidth sharing and admission control for elastic traffic. J. Roberts and L. Massoulie.
Weighted alpha-fair allocations were introduced in: Fair end-to-end window-based congestion control. J. Mo and J. Walrand.
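For reference, the weighted alpha-fair allocation maximizes sum_r w_r * x_r^(1-alpha)/(1-alpha) subject to the capacity constraints, with alpha -> 1 recovering weighted proportional fairness (sum_r w_r log x_r) and alpha -> infinity max-min fairness. A worked instance (our own example, not from the papers above) is proportional fairness on the classic two-link linear network: route 0 crosses both unit-capacity links, while routes 1 and 2 each use one link.

```python
# Proportionally fair (alpha = 1, unit weights) allocation on the
# two-link linear network: maximize log(x0) + log(x1) + log(x2)
# subject to x0 + x1 <= 1 and x0 + x2 <= 1.
# At the optimum both links are full, so x1 = x2 = 1 - x0 and we
# maximize f(x0) = log(x0) + 2*log(1 - x0); its derivative
# f'(x0) = 1/x0 - 2/(1 - x0) vanishes at x0 = 1/3.

def fprime(x):
    return 1.0 / x - 2.0 / (1.0 - x)

lo, hi = 1e-6, 1.0 - 1e-6
for _ in range(60):                 # f' is decreasing, so bisect on its sign
    mid = (lo + hi) / 2.0
    if fprime(mid) > 0:
        lo = mid
    else:
        hi = mid

x0 = (lo + hi) / 2.0
print(x0, 1.0 - x0)                 # 1/3 for the long route, 2/3 for each short route
```

The long route, which consumes two resources, receives less bandwidth than the short routes — the hallmark of proportional fairness as against max-min fairness, which would give every route 1/2.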
Positive recurrence of the Markov process is established in: Stability and performance analysis of networks supporting elastic services. Gustavo De Veciana, Tae-Jin Lee and Takis Konstantopoulos; and in: Impact of fairness on Internet performance. Thomas Bonald and Laurent Massoulie. The example of what goes wrong without fairness comes from the latter paper.