Pricing Internet Resources

An (edited) email conversation between Jon Crowcroft and Frank Kelly

April 1996

Abstract:

This is an edited version of an email correspondence between Jon Crowcroft and Frank Kelly, following the PRICE meeting on 19 April (on Pricing Resources in the Internet and other Cyber Economies).

Addresses:

Jon Crowcroft - J.Crowcroft@cs.ucl.ac.uk

Frank Kelly - f.p.kelly@statslab.cam.ac.uk


The conversation

Jon Crowcroft: I think there is a huge gulf between what the multiservice theory people are doing and what we can really try to deploy in the Internet, and we didn't really get down to that - what I mean by deployable technology is this:

basically, to be deployable, the system has to

a) meet the current internet model (as per Kevin Hoadley's and Bob Day's presentations), i.e. be based on datagrams and destination-based packet routing

b) deal with a net where traffic is aggregated

c) work on a network where many access links have higher capacity than the long haul links

d) default to fair share

I know the theory (e.g. Parekh's thesis on generalized processor sharing) says that we can do all the right things if we have all sources policed by a leaky bucket and all switches implement Fair Queueing, and then simply add weighted fair queueing....but that only works if you have control over all access routers/switches (or believe that all host software can be made to go via a leaky bucket instead of a TCP protocol source, which always attempts to increase its share of a bottleneck!)
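
(A minimal sketch, in Python, of the leaky-bucket/token-bucket policing mentioned here; the 64kb/s rate and burst size are illustrative values, not anything proposed in the conversation:)

    import time

    class TokenBucket:
        """Police a source to a long-term rate with a bounded burst allowance."""

        def __init__(self, rate_bps, burst_bits):
            self.rate = rate_bps             # policed rate, bits per second
            self.burst = burst_bits          # bucket depth, bits
            self.tokens = burst_bits
            self.last = time.monotonic()

        def conforms(self, packet_bits):
            now = time.monotonic()
            # refill tokens for the elapsed time, capped at the bucket depth
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_bits <= self.tokens:
                self.tokens -= packet_bits
                return True                  # conforming packet: forward it
            return False                     # non-conforming: drop, mark or delay

    # illustrative: police one source to 64 kb/s with an 8 kbit burst allowance
    policer = TokenBucket(rate_bps=64_000, burst_bits=8_000)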

as Ian Leslie and I suggested, we _could_ do this by a very crude means - simply partition up the network by addresses, and implement the sharing _only at_ bottlenecks

but we have no theory to back this up, and it doesn't scale at all anyhow......
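
(A rough sketch of the crude scheme just described, purely for illustration: classify traffic at the bottleneck by address partition and serve the partitions in proportion to configured weights; the partition names and weights below are invented:)

    from collections import deque

    weights = {"partition-A": 3, "partition-B": 2, "default": 1}   # invented shares
    queues = {name: deque() for name in weights}

    def classify(packet):
        """Map a packet to a partition by its destination address (stubbed here)."""
        return packet.get("partition", "default")

    def enqueue(packet):
        queues[classify(packet)].append(packet)

    def serve():
        """Weighted round-robin over the partitions: a crude stand-in for weighted
        fair queueing, reasonable only while packets are of similar sizes."""
        while any(queues.values()):
            for name, weight in weights.items():
                for _ in range(weight):
                    if queues[name]:
                        yield queues[name].popleft()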

of course this is all a short to medium term fix, until we deploy RSVP, and then we have the telco scenario hopefully, EXCEPT that I don't see how to deploy a system piecemeal......so there is another possible discussion - what are the economics of a mixture of a net as we have now and a net as we would like in the future?


Frank Kelly: I think we _do_ have the theory to back this up - and I think this fix could start an evolution in a good direction, towards the sort of solution (based on leaky bucket policers nearer to the sources) suggested by the longer term research. At the moment the transatlantic bandwidth appears to be by far the most acute bottleneck: it may be possible to get a decision to price (or fair share) transatlantic bandwidth while it may be impossible to get any decision on a more general approach. But if transatlantic bandwidth is priced (or fair shared), then this would provide an incentive for an institution to police its own use of transatlantic bandwidth, and there are many local mechanisms which could be explored without the need for central coordination.

Piecemeal deployment sounds bad, but I wonder is it so bad? It is not at all clear to me (or, I suspect, to anyone) how close to end users any prices have to get. Some institutions may not disaggregate the prices any further, except perhaps for commercial users that are connected via that institution.


Jon: one of the problems with this is that it shares the single bottleneck evenly - however, the set of traffic flows are destined for a whole range of onward destinations spread out throughout the US (and on into Australia, Hawaii, Japan etc etc) -

if there is congestion somewhere on the onward journey for some of these flows, we may simply get packet loss, having given them a larger share of the transatlantic link than they can actually usefully use...

this is why I have a problem with starting at the min-cut point in the net and moving back towards the sources when deploying fair (or unfair) shares... but, for some definition of better, I guess it is better than the current disaster!


Frank: I guess it depends what "waste its capacity" means.

1. If the bottleneck is priced (or shared fairly over institutions, or whoever pays the bill) then there is an incentive for users of the transatlantic link (or their institutions, or whoever pays the bill) to _not_ waste its capacity.

2. If a user needs to send multiple copies of a packet through the transatlantic link in order to get one copy through a later bottleneck, then I would say that the user is not wasting the capacity of the transatlantic link: rather the other bottleneck is wasting transatlantic capacity and the user is paying for the waste caused by this other bottleneck. The distinction arises since there is little the user can directly do about the waste. (Indirectly the user may apply pressure to the administration of the other bottleneck, and thus help the overall system to evolve!)


Jon: right - settlements, of course, between ISPs could be based on loss statistics...


Frank: Your proposal for subscription-based pricing: if the price is usage-insensitive, why would a user with access to several priority levels ever use anything other than the highest priority level?

PS: The cost to Demon of transatlantic capacity (45 Mb/s for 3 years for 15 Mpounds - press release on Demon's transatlantic provision), if I've done the sum correctly, corresponds to about 1 pound an hour for a 64kb/s connection, only slightly higher than the cost of an off-peak local telephone connection.

The calculation:

15 Mpounds per 3 years = roughly 10 pounds per minute

45 Mb/s = 700 * (64kb/s)

thus Demon's contract with BT corresponds to about 85 pence per hour for a 64kb/s connection. Local weekend BT rates are about 60 pence per hour. Thus the Internet (when it can control congestion) should be able to give a transatlantic telephone connection to a Demon subscriber for an additional cost of about the usage charge the subscriber pays (off-peak) for his dial-up connection.
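
(The same sum written out, using only the figures quoted above:)

    contract_pounds = 15_000_000                 # 45 Mb/s for 3 years
    minutes = 3 * 365 * 24 * 60                  # ~1.58 million minutes in 3 years
    pounds_per_minute = contract_pounds / minutes        # ~9.5, i.e. roughly 10
    channels_64k = 45_000 / 64                           # ~700 x (64 kb/s)
    pence_per_hour = 100 * pounds_per_minute / channels_64k * 60
    print(round(pounds_per_minute), round(channels_64k), round(pence_per_hour))
    # -> 10 703 81 : roughly 80-85 pence per hour per 64 kb/s channel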


Jon: um - it's not a subscription per user - it would be a subscription for some number of users from an organisation at any one time to get access at the higher rate; within the organisation, the allocation of actual user to address would be more dynamic....and could involve time-based charges etc... or tokens or whatever..

here are some apples, oranges and onions comparisons:

of course, internet voice typically uses ADPCM (16kbps) and silence suppression (a further 2-fold reduction), leading to around 8kbps average rate....about 5600 simultaneous calls transatlantic

dominant internet apps like WWW typically involve a web browse every 10 secs or so, which results in 11 packets and around 2kbytes each way, or 1.6kbps, per active user....which is not terribly different - it runs to about 3 times as many simultaneous WWW users as voice calls transatlantic before congestion really sets in....

now Demon have around 100,000 subscribers - they clearly estimate that >16% of use is non-local - this would scare voice carriers - most voice traffic is far more local than that, in my understanding!!
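
(The rough figures behind these comparisons, using only the numbers given above:)

    link_bps = 45_000_000          # transatlantic capacity
    voice_bps = 8_000              # ADPCM plus silence suppression, average rate
    voice_calls = link_bps // voice_bps        # ~5600 simultaneous voice calls
    www_users = 3 * voice_calls                # the "about 3 times as many" WWW users
    subscribers = 100_000
    print(voice_calls, 100 * www_users // subscribers)
    # -> 5625 calls, ~16% of subscribers active non-locally before congestion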


Frank: I feel a point to be drawn out is that an 8kb/s or 64kb/s transatlantic connection is very cheap by comparison with phone charges, but not so cheap that academic institutions could reasonably be given it free.


Jon: it depends, as you say, on whether unbounded access creates an unbounded demand for capacity, and, if there is a limit to demand, what the cost per user is...

in the long long run, I personally believe that international 64kbps or even 2Mbps per person _should_ be a basic free service, and that only exceptional demand should be usage-billed for.... and not just in the universities, but to all citizens of civilised countries... [after all, what percentage of our taxes goes towards books for students, or for school building maintenance, both of which can be replaced by cheaper net access to educational resources...]

however, I am quite happy, while there is an expensive resource (by the way, what's the cost of hiring a cable ship and 5000km of fibre?), to bill for it - the commercial ISPs all have a number for this, and their business makes sense - as I've said, it would focus the attention of universities a lot on what they use internet access for if more than just the richer departments started paying the 10 pounds per month for Demon 28kbps access (or even the 9k per annum for BTNet or Pipex 64kbps access)

not least, we have to pay a) for deployment, and b) for BT's redundancy bills :-)


Frank: My guess is that even (perhaps especially?) in the long long run most users will have high peak to mean ratios - because they use the network only occasionally throughout the 24 hours of the day, because they get better quality from applications that produce bursty traffic, and because of heterogeneity of applications.

IMHO Demon manage with their subscription mechanism since they have peak-rate policing (the modem speed - crude, but effective) and local telephone rates encourage users to log off (Demon don't get this usage charge of course, but it helps moderate user behaviour nonetheless - ironic that phone companies collect revenue for helping to moderate Internet usage!). I do think that users need some incentive not to blast away at peak rate for 24 hours a day - or else peak rates will be lower than is good for many applications. I think this argument applies even (and perhaps especially) when the costs and charges involved are very low. (Screen savers becoming active wallpaper: a picture window giving a real-time view of Pacific waves crashing onto the beach at...)


Jon: evidence from measuring our multimedia apps is that with adaptive compression in the senders, there is in fact a very small range of acceptable quality for audio, video, and other apps.....the limits are set by human perception - for video, around 2Mbps (fairly independent of compression algorithm - whether vector quantising, DCT, or even fractal based), scene, and so forth, and for audio, somewhere in the range 16-300kbps, given the sorts of qualities people seem to like (for entertainment, business or casual...) - also, 64kbps gives around the sort of timeliness for typical WWW and NFS file access....now the key thing here is that we actually have a deployed net with capacity for much more than 64kbps per person for several hours per day - the phone net is way higher than this (call blocking probabilities are infinitesimal within the UK now, so I can only assume that the peak call rate that is sustainable is pretty huge - and I've been told the limit is not line capacity, but call signalling protocol handling in the exchange - so a simpler system like internet packets instead of call setup should be easy to tolerate...)

actually, this is one thing I didn't make clear at the workshop - most companies with dial-up modem access DO HAVE usage charges - this is to discourage "modem hogging" and is typically smallish compared with the per-month internet access lease.....unless you stay logged on for hours a day...but this is simply an artefact again of the non-shared line model the phone imposes - on cable TV nets, where channels to whole neighbourhoods are shared to provide access, such time charges are not necessary.....

the other thing about human activities is that, while one might "stay logged on" on the internet, silence suppression for audio and motion detection in video do in fact result in very low rates most of the time.....

so I'm still not totally convinced...I think usage-based charging is necessary, yes, but I see it as the exception - after all, in a competitive market, surely the company that supports most users' average requirements most cheaply will be ahead - and if not charging for usage for average behaviour is one way of reducing lease costs, then that would be an incentive for such a provider (they still have to police the calls, but only charge usage above the MCR - as you said, in the ATM world, for an ABR or VBR call, you have one charge for the minimum cell rate or mean rate, and another charge rate above that - we could make the minimum or average across all subscribing users have a zero charge.....and the rate above that be something quite reasonable...)
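
(A hypothetical sketch of the two-part charge outlined here - zero usage charge up to a committed minimum/mean rate, and a per-unit charge only for traffic above it; the rates and the price per Mbit are invented for illustration:)

    def usage_charge(bits_sent, duration_s, committed_bps, pence_per_mbit_excess):
        """Charge nothing up to the committed rate, and per Mbit above it."""
        allowance_bits = committed_bps * duration_s
        excess_bits = max(0, bits_sent - allowance_bits)
        return pence_per_mbit_excess * excess_bits / 1_000_000

    # e.g. an hour averaging 80 kb/s against a 64 kb/s committed rate,
    # at a made-up 0.1 pence per Mbit of excess -> about 5.8 pence
    print(usage_charge(80_000 * 3600, 3600, 64_000, 0.1))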

anyhow this is just tweaking the constants, and the basic principle of charging for access is something I definitely accept...

