Frank Kelly - f.p.kelly@statslab.cam.ac.uk
Jon Crowcroft: I think there is a huge gulf between what the multiservice theory people are doing and what we can really try to deploy in the Internet, and we didn't really get down to that - what I mean by deployable technology is this:
basically, to be deployable, the system has to
a) meet the current internet model (as per Kevin Hoadley's and Bob Day's presentations), i.e. be based on datagrams and destination-based packet routing
b) deal with a net where traffic is aggregated
c) work on a network where many access links have higher capacity than the long haul links
d) default to fair share
I know the theory (e.g. Parekh's thesis on generalized processor sharing) says that we can do all the right things if we have all sources policed by a leaky bucket, and all switches implement Fair Queueing, and then simply add weighted fair queueing....but that only works if you have control over all access routers/switches (or believe that all host software can be made to go via a leaky bucket instead of a TCP protocol source, which always attempts to increase its share of a bottleneck!)
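just to pin down what "policed by a leaky bucket" means here, a minimal token-bucket sketch in Python - the class name and parameters are illustrative, not any real router's API, and a real policer would of course sit in the forwarding path:

    import time

    class TokenBucket:
        """Illustrative token ("leaky") bucket policer - parameters made up."""
        def __init__(self, rate_bps, burst_bits):
            self.rate = float(rate_bps)     # sustained rate the source is policed to
            self.burst = float(burst_bits)  # bucket depth: largest permitted burst
            self.tokens = self.burst
            self.last = time.monotonic()

        def conforms(self, packet_bits):
            # refill tokens for the elapsed time, capped at the bucket depth
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bits:
                self.tokens -= packet_bits
                return True      # packet conforms to the (rate, burst) contract
            return False         # a policer drops it; a shaper would delay it instead

Parekh's result is that if every source conforms to such a (rate, burst) contract and every switch runs weighted fair queueing, per-flow delay bounds follow - which is exactly the universal-deployment assumption I am doubting.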
as Ian Leslie and I suggested, we _could_ do this by a very crude means - simply partition up the network by addresses, and implement the sharing _only at_ bottlenecks
but we have no theory to back this up, and it doesn't scale at all anyhow......
of course this is all a short to medium term fix, until we deploy RSVP, and then we have the telco scenario hopefully, EXCEPT that I don't see how to deploy a system piecemeal......so there is another possible discussion - what are the economics of a mixture of a net as we have now, and a net as we would like in the future?
Piecemeal deployment sounds bad, but I wonder is it so bad? It is not at all clear to me (or, I suspect, to anyone) how close to end users any prices have to get. Some institutions may not disaggregate the prices any further, except perhaps for commercial users that are connected via that institution.
if there is congestion somewhere on the onward journey for some of these flows, we may simply get packet loss, and have given them a larger share of the transatlantic link than they can actually usefully use...
this is why i have a problem with starting at the min-cut point in the net and moving back towards the sources when deploying fair (or unfair) shares... but, for some definition of better, I guess it is better than the current disaster!
1. If the bottleneck is priced (or shared fairly over institutions, or whoever pays the bill) then there is an incentive for users of the transatlantic link (or their institutions, or whoever pays the bill) to _not_ waste its capacity.
2. If a user needs to send multiple copies of a packet through the transatlantic link in order to get one copy through a later bottleneck, then I would say that the user is not wasting the capacity of the transatlantic link: rather the other bottleneck is wasting transatlantic capacity and the user is paying for the waste caused by this other bottleneck. The distinction arises since there is little the user can directly do about the waste. (Indirectly the user may apply pressure to the administration of the other bottleneck, and thus help the overall system to evolve!)
ps The cost to Demon of transatlantic capacity (45 Mb/s for 3 years for 15 Mpounds - Press release on Demon's transatlantic provision.), if I've done the sum correctly, corresponds to about 1 pound an hour for a 64kb/s connection, only slightly higher than the cost of an off-peak local telephone connection.
The calculation:
15 Mpounds per 3 years = 10 pounds per minute
45 Mb/s = 700 * (64kb/s)
thus Demon's contract with BT corresponds to about 85 pence per hour for a 64kb/s connection. Local weekend BT rates are about 60 pence per hour. Thus the Internet (when it can control congestion) should be able to give a transatlantic telephone connection to a Demon subscriber for an additional cost of about the usage charge the subscriber pays (off-peak) for his dialup connection.
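The same sum as a few lines of Python, for anyone who wants to check the arithmetic (figures exactly as above, from the press release):

    contract_pounds = 15e6          # 15 Mpounds over the contract
    years = 3
    link_bps = 45e6                 # 45 Mb/s transatlantic
    channel_bps = 64e3              # one 64kb/s connection

    minutes = years * 365 * 24 * 60
    print(contract_pounds / minutes)              # ~9.5, i.e. "about 10 pounds per minute"
    channels = link_bps / channel_bps
    print(channels)                               # ~703, i.e. "about 700" 64kb/s channels
    pounds_per_hour = contract_pounds / (years * 365 * 24) / channels
    print(100 * pounds_per_hour)                  # ~81 pence/hour per channel
                                                  # (~85p using the rounded figures above)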
here are some apples, oranges and onions comparisons:
of course, internet voice typically uses ADPCM (16kbps) and silence suppression (a further 2-fold reduction), leading to around 8kbps average rate....5600 transatlantic calls
dominant internet apps like WWW typically generate a web browse per 10 secs, which results in 11 packets and around 2kbytes each way, or 1.6kbps per active user....which is not terribly different - it runs to about 3 times as many simultaneous WWW calls as voice transatlantic before congestion really sets in....
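as a sanity check on those two figures (the per-application rates are the rough averages quoted above, not measurements):

    link_bps = 45e6
    voice_bps = 8e3      # ADPCM at 16kbps, halved by silence suppression
    www_bps = 1.6e3      # ~2 kbytes each way per 10-second browse

    print(link_bps / voice_bps)   # ~5600 simultaneous transatlantic voice calls
    print(link_bps / www_bps)     # ~28000 WWW users at that average rate

naive division of average rates gives about 5 times as many WWW users as voice calls; the "about 3 times" above is lower presumably because WWW traffic is bursty, so congestion bites well before the averages fill the link....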
now Demon have around 100,000 subscribers - they clearly estimate that >16% of use is non-local - this would scare voice carriers - most voice traffic is far more local than that in my understanding!!
in the long long run, i personally believe that international 64kbps or even 2Mbps per person _should_ be a basic free service, and that only exceptional demand should be usage billed for.... and not just in the universities, but to all citizens of civilised countries... [after all, what percentage of our taxes goes towards books for students, or for school building maintenance, both of which can be replaced by cheaper net access to educational resources...]
however, i am quite happy, while there is an expensive resource (by the way, what's the cost of hiring a cable ship and 5000km of fiber?), to bill for it - the commercial ISPs all have a number for this, and their business makes sense - as i've said, it would focus the attention of universities a lot on what they use internet access for if more than just the richer departments started paying the 10 pounds per month for Demon 28kbps access (or even the 9k per annum for BTNet or Pipex 64kbps access)
not least, we have to pay a) for deployment, and b) for BT's redundancy bills :-)
IMHO Demon manage with their subscription mechanism since they have peak rate policing (the modem speed - crude, but effective) and local telephone rates encourage users to log off (Demon don't get this usage charge of course, but it helps moderate user behaviour nonetheless - ironic that phone companies collect revenue for helping to moderate Internet usage!). I do think that users need some incentive not to blast away at peak rate for 24 hours a day - or else peak rates will be lower than is good for many applications. I think this argument applies even (and perhaps especially) when the costs and charges involved are very low. (Screen savers becoming active wallpaper: a picture window giving a real-time view of Pacific waves crashing onto the beach at...)
actually, this is one thing i didn't make clear at the workshop - most companies with dial-up modem access DO HAVE usage charges - this is to discourage "modem hogging" and is typically smallish compared with the per-month internet access lease.....unless you stay logged on for hours a day...but this is again an artefact of the non-shared-line model the phone network imposes - on cable tv nets, where channels to whole neighbourhoods are shared to provide access, such time charges are not necessary.....
the other thing about human activity is that while one might "stay logged on", on the internet silence suppression for audio and motion detection in video do in fact result in very low rates most of the time.....
so i'm still not totally convinced...i think usage based charging is necessary, yes, but i see it as the exception - after all, in a competitive market, surely the company that supports most users' average requirements most cheaply will be ahead - and if not charging for usage for average behaviour is one way of reducing lease costs, then that would be an incentive for such a provider (they still have to police the calls, but only charge usage above the MCR) - as you said, in the ATM world, for an ABR or VBR call, you have one charge for the minimum cell or mean rate, and another charge rate above that - we could make the minimum or average across all subscribing users have a zero charge.....and the rate above that be something quite reasonable...
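a toy tariff function to make the point concrete (the MCR floor and the price per unit are made-up numbers, purely illustrative):

    def usage_charge(mean_rate_kbps, hours, mcr_kbps=8.0, pence_per_kbps_hour=1.0):
        """Bill only the average rate in excess of the zero-charge MCR floor."""
        excess = max(0.0, mean_rate_kbps - mcr_kbps)
        return excess * pence_per_kbps_hour * hours

    print(usage_charge(6.0, 24))    # 0.0    - average behaviour rides free
    print(usage_charge(64.0, 24))   # 1344.0 - blasting at peak rate all day is billed

typical users never cross the floor, so the provider recovers the lease through the flat subscription, and only sustained above-average senders see a usage bill - which is exactly the incentive you are asking for...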
anyhow this is just tweaking the constants, and the basic principle of charging for access is something i definitely accept...