University of Cambridge > Mathematics > Statistical Laboratory > Richard Weber > Markov Chains

Markov Chains

Engel's probabilistic abacus

(a chip firing game for calculating absorption probabilities - see Appendix C)
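The abacus computes absorption (hitting) probabilities, which can also be found numerically by iterating the hitting-probability equations of Lecture 3; the iteration converges to their minimal non-negative solution. Here is a minimal Python sketch of that idea, using a fair gambler's ruin chain invented for illustration (this is a point of comparison, not Engel's chip-firing procedure itself):

```python
def hitting_probs(P, A, n_iter=1000):
    """Probability of ever hitting the set A from each state, found by
    iterating h <- 1 on A, (P h) off A; the iteration converges upwards
    to the minimal non-negative solution of the hitting equations."""
    n = len(P)
    h = [1.0 if i in A else 0.0 for i in range(n)]
    for _ in range(n_iter):
        h = [1.0 if i in A else sum(P[i][j] * h[j] for j in range(n))
             for i in range(n)]
    return h

# Fair gambler's ruin on {0, 1, ..., 4}: states 0 and 4 are absorbing.
N = 4
P = [[0.0] * (N + 1) for _ in range(N + 1)]
P[0][0] = P[N][N] = 1.0
for i in range(1, N):
    P[i][i - 1] = P[i][i + 1] = 0.5
h = hitting_probs(P, {0})   # h[i] should approach 1 - i/N
```

For the fair walk the classical answer is h_i = 1 - i/N, and the iteration reproduces it to machine precision.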


This is the home page for Richard Weber's course of 12 lectures to second-year Cambridge mathematics students in autumn 2011. This material is provided for students, supervisors (and others) to freely use in connection with this course.

Feedback

This year's course finished on 15 November 2011. The written feedback questionnaire has been completed by 52 students, and a further 29 have completed the electronic form. If you have not yet given feedback you can do so on-line here. This will be sent to my email anonymously. After reading the responses, I will forward them to the Faculty Office.

Course notes

The course closely follows Chapter 1 of James Norris's book, Markov Chains, 1997 (Chapter 1, Discrete Markov Chains, is freely available to download, and I recommend that you read it). I am also publishing some notes; each lecture has notes of 3.5–4 pages.

These notes are now complete (subject to any small typos that may still be found).

The notes have hyperlinks. If you click on an entry in the table of contents, or on a page number in the Index, you will be taken to the appropriate page.

If you think you have found a mistake in these notes, first check that you have the most recent copy (as I may already have made a correction). Otherwise, please let me know and I will incorporate the correction into the notes.

Here also are the overhead slides that I sometimes used in lectures.

Course Blog

There is a course blog in which I write a few comments after each lecture: to emphasize an idea, give a sidebar or a correction (!), or answer an interesting question (perhaps one that a student has sent me by email).

Comments, Discussion and Questions

There is also a comments thread at the end of this page, where you may make comments, or ask questions. Please read this thread. Some good questions have been asked, and answered.

Examples sheets

There are 2 examples sheets, each containing 13 questions, as well as 3 or 4 "extra" optional questions. The extra questions are about things that I think you will find interesting and which are off the well-beaten path. You should receive a supervision on each examples sheet.

Example sheet 1 (for Lectures 1 – 5)
Example sheet 2 (for Lectures 6 – 11)

Note to supervisors

I have noticed a few errors in the "solutions" example sheet that I distributed to you (Questions 1.6 and 1.7). If you would like a corrected sheet, then please contact me. If you notice errors or have any comments for me on how your students are doing with these sheets, I would be glad to know.

Past exam questions

Here is a single file of all the tripos examination questions on Markov Chains for the years 2001–2011.
The questions prior to 2004 are from a former Part II course, so I have crossed out those that are not relevant to the present IB course.

Recommended books

  1. G.R. Grimmett and D.R. Stirzaker, Probability and Random Processes. OUP 2001 (Sections 6.1–6.5 are on discrete Markov chains.)
  2. J.R. Norris, Markov Chains. CUP 1997 (Chapter 1, Discrete Markov Chains, is freely available to download. I highly recommend that you read this chapter. If you need to brush up your knowledge of how to solve linear recurrence relations, see Section 1.11.)

A comment on making good use of a lecturer's printed notes

I sometimes wonder whether it is helpful to publish full course notes. It is helpful that we can dispense with some tedious copying-out, and you are guaranteed an accurate account. But there are also benefits to hearing and writing down things yourself during a lecture, and so I hope you will still do some of that. I intend to say some things in every lecture that are not in the notes. In learning mathematics repeated exposure to ideas is helpful, so I hope that a combination of reading, listening, writing and solving problems will work well for you. I would be interested to hear from you if you would like to tell me what you think about this, and how you find best to use the notes that I am putting here.

When I was a student I developed a system, which worked well for me, of taking the notes that I had made during lectures and then re-writing them in the minimal number of pages that was sufficient for me to remember the essential content of the course. Here, for fun and some historical interest, are my notes for the 1972 course "Markov methods" as taught by David Kendall, and also my summary revision notes. I condensed the course into 4 pages (on the large computer line-printer paper that we had in those days). Looking at these notes now, I see flaws - and I expect many of you can do much better in preparing your revision notes. Only the first 2 pages are on discrete-time Markov chains. The 1972 course also covered continuous-time Markov processes, which we now cover in the Part II course "Applied Probability" (but the 1972 IB course missed out many things that we are now doing, such as random walks and reversibility). Incidentally, also for historical interest, Question #10 on Example Sheet 1 is actually a tripos question from 1972, Paper IV, #10C.

Other resources and further reading

Sheldon Ross, Introduction to Probability Models, 2006 (Chapter 4) and Stochastic Processes, 1995 (Chapter 4). (Each of these books contains a readable chapter on Markov chains and many nice examples. The author does a good job of making difficult concepts seem fairly simple.) See also Sheldon Ross and Erol Pekoz, A Second Course in Probability, 2007 (Chapter 5 gives a readable treatment of Markov chains and covers many of the topics in our course. In other chapters this book provides a gentle introduction to probability and measure theory.)

Charles Grinstead and Laurie Snell, Introduction to Probability, Second edition, 1997 (freely available to download). This book is also quite easy to read. The authors have good insight and you will find some gems here. Chapter 11 is on Markov Chains. This book is particularly interesting on absorbing chains and mean passage times. There are many nice exercises, some notes on the history of probability, and on pages 464-466 there is information about A. A. Markov and the early development of the field. There are two distinct approaches to the study of Markov chains. One emphasises probabilistic methods (as does Norris's book and our course); the other is more matrix-based, as is this book. The probabilistic methods are more satisfying, but it is good to know something about the matrix methods too.

Takis Konstantopoulos has a nice set of course notes on Markov Chains and Random Walks, 2009, at a somewhat more in-depth level than our course. You might like to browse through his notes.

David Aldous and Jim Fill, Reversible Markov Chains and Random Walks on Graphs, 1994-2001 (This monograph is freely available and a great source of delightful information and insightful comment. It covers much more advanced topics than our course. However, Chapters 1-3 are well within your grasp, and reading them will deepen your understanding.)

Peter Doyle and Laurie Snell, Random walks and electric networks, 2006 (freely available to download). You could read this easy-to-understand paper to learn more about the interesting connection between the recurrence/transience properties of random walks and resistance in electrical networks, which I will briefly discuss in Lecture 12.

David Levin, Yuval Peres and Elizabeth Wilmer, Markov Chains and Mixing Times, 2008. Chapters 1 and 2 nicely summarise most of the content of our course in a small number of pages. However, I reference this textbook mainly because it is a good place to read about some of the fascinating topics within the field of Markov chains that interest researchers today. For example, Chapter 5, on Coupling, will tell you how the ideas we used in Lecture 9 can be extended. This is a book you will want to read if you ever go beyond undergraduate study in this field.

The PageRank citation ranking: bringing order to the web (by Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd, 1998) describes the workings of the Google PageRank algorithm and the part that Markov chains play in this. This is discussed in brief in Lecture 8, Example 8.6 (random surfer). See also the Wikipedia article about PageRank.
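The random-surfer chain is easy to experiment with. Below is a small Python sketch of the power iteration at the heart of PageRank, run on a tiny three-page web invented for illustration (damping factor 0.85, as in the paper); it is an illustrative toy, not Google's implementation:

```python
def pagerank(links, d=0.85, n_iter=100):
    """Power iteration for the random-surfer chain: with probability d the
    surfer follows a uniformly random outgoing link, and with probability
    1 - d jumps to a uniformly random page."""
    pages = list(links)
    n = len(pages)
    r = {p: 1.0 / n for p in pages}
    for _ in range(n_iter):
        new = {p: (1 - d) / n for p in pages}
        for p in pages:
            for q in links[p]:
                new[q] += d * r[p] / len(links[p])
        r = new
    return r

# Tiny invented web: A links to B and C, B links to C, C links to A.
ranks = pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]})
```

Note that every page here has at least one outgoing link; dangling pages (with no out-links) need special handling, as the paper discusses.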

Peter Winkler, Mathematical Mind Benders, 2007. This is not a book on Markov Chains, but a collection of mathematical puzzles that I recommend. Many of the puzzles are based on probability. It includes the "Evening out the Gumdrops" puzzle that I discuss in lectures, and lots of other great problems. He also has an earlier book, Mathematical Puzzles: A Connoisseur's Collection, 2003.

Frederick Mosteller, 50 Challenging Problems in Probability, with Solutions, 1987. This is a classic book which anyone who is interested in probability should enjoy.

David Aldous has an interesting page of his reviews of many non-technical books related to probability. You might enjoy thinking about the points made in his posting: Presenting probability via math puzzles is harmful.

Recreational mathematics

I will be speaking to the Adams Society of St John's College about my recent work on the Bomber Problem (on November 30th at 7pm in the Palmerston Room). This is one of my favorite unsolved problems in the field of Markov Decision Processes.

Schedules

Definition and basic properties, the transition matrix. Calculation of n-step transition probabilities.  Communicating classes, closed classes, absorption, irreducibility. Calculation of hitting probabilities and mean hitting times; survival probability for birth and death chains. Stopping times and statement of the strong Markov property. [5]

Recurrence and transience; equivalence of transience and summability of n-step transition probabilities; equivalence of recurrence and certainty of return. Recurrence as a class property, relation with closed classes. Simple random walks in dimensions one, two and three. [3]

Invariant distributions, statement of existence and uniqueness up to constant multiples. Mean return time, positive recurrence; equivalence of positive recurrence and the existence of an invariant distribution. Convergence to equilibrium for irreducible, positive recurrent, aperiodic chains *and proof by coupling*. Long-run proportion of time spent in given state. [3]

Time reversal, detailed balance, reversibility; random walk on a graph. [1]
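Several of the scheduled topics can be checked numerically on the two-state chain of Section 1.6. The Python sketch below (with invented parameters a = 0.3, b = 0.2) computes P^n by repeated multiplication and compares it with the invariant distribution π = (b, a)/(a+b):

```python
def mat_mult(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(P, n):
    """n-th power of P by repeated multiplication of the identity by P."""
    result = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(n):
        result = mat_mult(result, P)
    return result

a, b = 0.3, 0.2
P = [[1 - a, a], [b, 1 - b]]
Pn = mat_pow(P, 50)
pi = [b / (a + b), a / (a + b)]   # invariant distribution (0.4, 0.6)
# Each row of P^n converges to pi; here p_00^(n) = 0.4 + 0.6 * (0.5)^n,
# matching the closed form for a two-state chain.
```

With n = 50 the term (0.5)^n is negligible, so both rows of Pn agree with pi to machine precision, illustrating convergence to equilibrium for this irreducible, aperiodic chain.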

Course outline

    1 Definitions, basic properties, the transition matrix
       1.1 An example and some interesting questions
       1.2 Definitions
       1.3 Where do Markov chains come from?
       1.4 How can we simulate them?
       1.5 The n-step transition matrix
       1.6 P^{(n)} for a two-state Markov chain

     2 Calculation of n-step transition probabilities, class structure, absorption, and irreducibility
       2.1 Example: a three-state Markov chain
       2.2 Example: use of symmetry
       2.3 Markov property
       2.4 Class structure
       2.5 Closed classes
       2.6 Irreducibility

     3 Hitting probabilities and mean hitting times
       3.1 Absorption probabilities and mean hitting times
       3.2 Calculation of hitting probabilities and mean hitting times
       3.3 Absorption probabilities are minimal solutions to RHEs
       3.4 Gambler's ruin

     4 Survival probability for birth and death chains, stopping times
       4.1 Survival probability for birth death chains
       4.2 Mean hitting times are minimal solutions to RHEs
       4.3 Stopping times
       4.4 Strong Markov property

     5 Recurrence and transience
       5.1 Recurrence and transience
       5.2 Equivalence of recurrence and certainty of return
       5.3 Equivalence of transience and summability of n-step transition probabilities
       5.4 Recurrence as a class property
       5.5 Relation with closed classes

     6 Random walks in dimensions one, two and three
       6.1 Simple random walk on Z
       6.2 Simple symmetric random walk on Z^2
       6.3 Simple symmetric random walk on Z^3
       6.4 *A continuized analysis of random walk on Z^3*
       6.5 *Feasibility of wind instruments*

     7 Invariant distributions
       7.1 Examples of invariant distributions
       7.2 Notation
       7.3 What does an invariant measure or distribution tell us?
       7.4 Invariant distribution is the solution to LHEs
       7.5 Stationary distributions
       7.6 Equilibrium distributions

     8 Existence and uniqueness of invariant distribution, mean return time, positive and null recurrence
       8.1 Existence and uniqueness up to constant multiples
       8.2 Mean return time, positive and null recurrence
       8.3 Random surfer

     9 Convergence to equilibrium for ergodic chains
       9.1 Equivalence of positive recurrence and the existence of an invariant distribution
       9.2 Aperiodic chains
       9.3 Convergence to equilibrium *and proof by coupling*

     10 Long-run proportion of time spent in given state
       10.1 Ergodic theorem
       10.2 *Kemeny's constant and the random target lemma*

     11 Time reversal, detailed balance, reversibility, random walk on a graph
       11.1 Time reversal
       11.2 Detailed balance
       11.3 Reversibility
       11.4 Random walk on a graph

     12 Concluding problems and recommendations for further study
       12.1 Reversibility and Ehrenfest's urn model
       12.2 Reversibility and the M/M/1 queue
       12.3 *The probabilistic abacus*
       12.4 *Random walks and electrical networks*
       12.5 Probability courses in Part II

     Appendix
     A Probability spaces
     B Historical notes
     C The probabilistic abacus for absorbing chains

You can send me email with comments or corrections on my lectures, the notes or the examples sheet. I am very glad to receive comments and suggestions from students studying the course.


Comments, Discussion and Questions

In the comment thread below you may comment, or ask questions, regarding anything connected with this course (what I said in lectures, what is on this page, in the course notes, or in the course blog).

If you subscribe to this comments thread you will receive an email whenever I make an announcement here that I have made a change or addition to the course notes.



Copyright remains with the author.



Richard Weber ( rrw1@cam.ac.uk )
Last modified: November 2011