Richard Harvey

Silly old buffers

When I was growing up the word "buffer" had two meanings: a fellow, usually a rather bumbling one (a more affectionate and milder version of what young people call "gammon" nowadays), and the "buffer" as in the Chief Petty Officer on a ship. It is this latter definition which points towards the modern use of buffer, which is as an interface between two things. In the case of the Navy, the buffer is the interface between the officers and the ratings. In the case of digital electronics and computing, the buffer is the interface between two devices which may wish to run at different speeds.

Imagine you are in a line, or a queue, at a restaurant. The line is a buffer -- in this case a buffer between a faster thing, the rate at which people are arriving at the restaurant, and a slower thing, the rate at which the maître d'hôtel can seat people. The line is very visible, and eventually it is so long that you might decide to take your custom elsewhere. But once you are seated there is another buffer, the one between taking your order and the kitchen, which can only produce food at a certain rate. That buffer -- the kitchen buffer -- is hidden and is managed by the waiting staff and the kitchen, usually with the offer of free drinks if there are unacceptable delays.
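The queue in this analogy can be sketched in a few lines of code. This is a toy simulation, not anything from a real system: the arrival and seating rates are illustrative numbers chosen so that diners arrive slightly faster than they can be seated.

```python
from collections import deque

# Toy model of the restaurant queue: diners arrive faster than the
# maitre d' can seat them, so the queue (the buffer) absorbs the
# difference -- for a while. Rates are illustrative, not measured.
ARRIVALS_PER_MIN = 3   # the fast thing
SEATINGS_PER_MIN = 2   # the slow thing

queue = deque()
diner_id = 0
for minute in range(10):
    for _ in range(ARRIVALS_PER_MIN):
        diner_id += 1
        queue.append(diner_id)      # join the back of the line
    for _ in range(SEATINGS_PER_MIN):
        if queue:
            queue.popleft()         # maitre d' seats the front diner
    print(f"minute {minute}: {len(queue)} waiting")
```

The queue grows by one diner every minute: the buffer hides the rate mismatch, but the waiting time (the latency) keeps increasing until somebody walks away.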

The internet has been intimately associated with buffers since its inception. One of the precursors to the internet, ALOHANET, sacrificed full-duplex operation (duplex refers to the ability to talk and listen simultaneously) for the want of an extra memory buffer. The reason [1] was that another buffer would have cost $300 (around $2,200 in 2021). Fifty years on, the developments of ALOHANET are now used by everyone for all sorts of purposes, from surfing the web to singing in groups. And it is that last activity, singing in groups, which has exposed the Achilles heel of the internet: latency.

When you log in to buy a new broadband connection you are frequently encouraged to buy more bandwidth. Although historically bandwidth meant something else, nowadays it means the amount of data one might be able to send in a fixed amount of time, usually measured in hundreds of megabits per second. In practice, to measure the bandwidth you might run a bandwidth checker, which will look up your internet address and guess the location of a nearby server. The bandwidth checker then runs on your local machine and sends blocks of data back and forth between your machine and the server. After a few minutes, which is the time it takes to average out the local fluctuations, you get a result.
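The principle behind such a checker is simple enough to sketch. The function below is a hypothetical illustration of the idea, not the code any real speed test uses: push a known amount of data through a connection, time it, and divide. Here the "connection" is just a local byte buffer, so the figure printed reflects memory speed rather than a real link.

```python
import time

def measure_throughput(send_block, block_bytes=1_000_000, blocks=8):
    """Crude bandwidth estimate in the spirit of a speed checker:
    time how long a fixed number of data blocks takes to send,
    then divide bits sent by elapsed seconds."""
    payload = b"\0" * block_bytes
    start = time.perf_counter()
    for _ in range(blocks):
        send_block(payload)
    elapsed = time.perf_counter() - start
    bits_sent = block_bytes * blocks * 8
    return bits_sent / elapsed / 1e6   # megabits per second

# Stand-in for a real server connection: we "send" into local memory,
# so this measures the machine, not the broadband line.
sink = bytearray()
mbps = measure_throughput(sink.extend)
print(f"{mbps:.0f} Mbit/s")
```

Note what this measures and what it does not: it averages throughput over the whole transfer, and says nothing at all about how long an individual small packet waits, which is exactly the blind spot the rest of this post is about.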

But, hang on, almost certainly your bandwidth is vastly in excess of the data you need to send. Of course it is nice if that terabyte backup completes in minutes, but in terms of what you need, which is voice, music and films, there is plenty of bandwidth. Super, so when you get together with your mates on Zoom for a choir practice or music it should all work fine. But it's a complete mess; no-one seems to be able to come in at the right time. The problem is that there are delays in the system, or latency, and, worse, each user gets a different delay, so your rendition of Haydn's Creation becomes chaos.

The problem is buffers, and in some cases ginormous buffers, in the modem supplied to you by your internet service provider. Buffer memory is now so cheap that fitting oversize buffers is de rigueur. Such large buffers fool your machine into thinking that your connection has very high bandwidth. That might fool some bandwidth checkers, but once the buffer is full nothing else can get through.

To return to our restaurant analogy, let's imagine a busy restaurant that is serving sit-down customers and take-outs. There is some flexibility with sit-down customers, as they are prepared to wait for 20 minutes while they have a drink at their table, marvel at the décor and so on. Take-out customers have been told to arrive at a specific time, as hot food cools very quickly on the seat of a car. There is one queue. You arrive at the queue hoping for a sit-down meal. The queue is short and it is moving quickly. "Come in, come in," says the welcoming maître d'hôtel. The bandwidth looks high. Once inside you are routed through the restaurant to another queue -- this is the queue to be seated. This hidden queue is moving slowly. Soon it fills to gargantuan proportions. Meanwhile the maître d'hôtel is stacking people into the queue. Worse, take-out customers are joining the queue. They need immediate service but are stuck behind the sit-down customers. In this analogy the sit-down customers are regular packets of data and the take-out customers are Voice-over-IP or VoIP packets (the internet labels these packets as urgent, but if they are stuck in large buffers then no-one knows they are there).
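The arithmetic of that hidden queue is worth seeing. In this sketch (packet sizes and the 10 Mbit/s drain rate are illustrative assumptions, not figures from any particular modem) one small VoIP packet arrives behind 200 full-size bulk packets in a first-in-first-out buffer, and its delay is simply the time to transmit everything ahead of it.

```python
from collections import deque

# One shared FIFO buffer draining onto a 10 Mbit/s link. The urgent
# VoIP packet is small, but it must wait behind every bulk packet
# queued ahead of it. All numbers here are illustrative.
LINK_MBPS = 10
buffer = deque([("bulk", 1500)] * 200)   # 200 full-size bulk packets, in bytes
buffer.append(("voip", 200))             # one small urgent packet, at the back

delay_ms = 0.0
while buffer:
    kind, size_bytes = buffer.popleft()
    delay_ms += size_bytes * 8 / (LINK_MBPS * 1000)  # ms to send this packet
    if kind == "voip":
        print(f"VoIP packet delayed {delay_ms:.0f} ms")
```

Two hundred 1500-byte packets at 10 Mbit/s is 240 ms of queueing before the VoIP packet even starts, which is far beyond the roughly 150 ms people tolerate in conversation, and the "urgent" marking on the packet does nothing because the buffer drains strictly in order.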

In the internet this situation is exacerbated by the TCP "sawtooth", a mechanism designed to make maximum use of the available bandwidth. In a nutshell, the algorithm is: send packets faster and faster until you lose some, then halve the rate. The presumption is that packets get lost due to congestion (other users), so if you had sole use of a link and suddenly there is congestion, another user must have joined and you need to halve your rate. In our restaurant analogy the maître d'hôtel gives no sign of congestion, so the number of customers arriving every minute just keeps increasing.
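The sawtooth in a nutshell, as code. This is a toy model of the additive-increase/multiplicative-decrease idea described above, with a made-up link capacity standing in for the congestion signal:

```python
# Toy TCP sawtooth: increase the sending rate by one unit per round
# until a loss is detected, then halve it. Exceeding CAPACITY stands
# in for packet loss on a congested link; all numbers are illustrative.
CAPACITY = 32          # packets per round the link can carry

rate = 1
history = []
for _ in range(40):
    history.append(rate)
    if rate > CAPACITY:    # loss: we overshot the link
        rate //= 2         # multiplicative decrease
    else:
        rate += 1          # additive increase
print(history)
```

Plotted, `history` is the familiar sawtooth: a slow climb, a sharp halving, another climb. The crucial point for this post is the trigger: the sender only backs off when packets are *lost*, so a huge buffer that silently absorbs the overshoot keeps the sender climbing while latency balloons.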

What if our restaurant had one queue and it was visible outside the restaurant [4]? If the queue got too long, the maître d'hôtel would close the queue. This is called "tail drop". Tail drop signals to customers, and to the internet, that the buffer is full and we should slow down. As the queue is now short, the take-out customers (our VoIP packets) suffer only a small delay; the latency drops and everything works again. What I've just described is bufferbloat, and you can read more about it [2], or there is an amusing lecture by Dave Taht that replaces packets with network engineers [3]. Either Dave Taht or Jim Gettys named these hidden queues the dark buffers of the internet. They have been known about for a while, but it is taking a surprisingly long time for ISPs to roll out fixes. Possibly some of the less reputable ISPs see buffers as a way of beating the internet speed meters, but you can find speed meters that measure latency, and the results are not pretty! So, if they are thinking that large queues make their ISP service look good, then they are indeed silly old buffers.
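Tail drop itself is almost trivially simple, which is part of its appeal. A minimal sketch, with an illustrative limit of 20 packets: once the queue is full, new arrivals are simply discarded, which caps the worst-case waiting time and, through the lost packets, tells senders to back off.

```python
from collections import deque

# Tail drop on a bounded queue: a burst of 100 packets arrives, but
# the buffer admits at most MAX_QUEUE of them. The 20-packet limit
# is an illustrative choice, not a recommended setting.
MAX_QUEUE = 20
queue = deque()
dropped = 0

for packet in range(100):
    if len(queue) < MAX_QUEUE:
        queue.append(packet)       # room at the back: join the queue
    else:
        dropped += 1               # tail drop: this arrival is lost

print(f"queued {len(queue)}, dropped {dropped}")
```

Modern fixes such as active queue management go further than this blunt cutoff (dropping *before* the queue is full, so senders slow down sooner), but even plain tail drop with a sensibly small limit bounds latency in a way the dark, oversized buffers do not.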

  1. Norman Abramson, "Development of the ALOHANET", IEEE Transactions on Information Theory, Vol. IT-31, No. 2, March 1985, pp. 119–123.
