During the past two years, I read for a Master of Science degree at Queen Mary University of London by distance learning. The degree was lecture-based but included a final dissertation on a related subject.
My research project focused on buffer de-bloating: it studied a common problem on the Internet today caused by excessively large buffers, examined its effect on Internet performance, and made recommendations for mitigating it.
Buffers are the physical storage present in most telecommunications devices and are essential to the proper functioning of a network. A buffer's role is to hold data received by the device until it is processed, or to hold data awaiting transmission until its turn to be sent. One of their main purposes is to enable the interconnection of two links or networks which operate at different data rates. In every statistically multiplexed network, data bursts will occur, producing short periods during which data arrives much faster than it can be sent onward. The buffer holds this data until it can be processed, so that it is not lost and eventually re-transmitted.
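The burst-absorbing role described above can be sketched in a few lines of Python (all numbers here are illustrative, not taken from any measured network): a FIFO buffer soaks up a short burst that arrives faster than the link can drain it, and only tail-drops once even the buffer is full.

```python
# Illustrative sketch: a FIFO buffer absorbing a burst where the arrival
# rate briefly exceeds the link's service rate.

arrivals = [5, 5, 5, 0, 0, 0]   # packets arriving per tick (a 3-tick burst)
service_rate = 2                # packets the link can transmit per tick
capacity = 10                   # buffer size in packets

queue = 0                       # current buffer occupancy
dropped = 0                     # packets lost to tail-drop

for arriving in arrivals:
    queue += arriving
    if queue > capacity:        # buffer overflow: tail-drop the excess
        dropped += queue - capacity
        queue = capacity
    queue -= min(queue, service_rate)  # transmit up to service_rate packets

print(queue, dropped)           # → 2 1
```

Without the buffer, every packet beyond the service rate in each tick would have been lost; with it, the burst is absorbed almost entirely, at the cost of the queued packets waiting their turn.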
Since memory has become cheap, manufacturers have inadvertently overdone buffering in many devices. The problem is that while buffering reduces data loss, excessive buffering results in high latency and reduced throughput when closed-loop transport protocols like TCP are used. Congestion-avoiding protocols such as TCP rely on timely congestion notifications (i.e. packet losses) to regulate their transmission rates. Large buffers delay these notifications from reaching the transmitter, so it keeps sending at too high a rate and latency grows. This phenomenon is referred to as “bufferbloat”, and it is one of the main causes of latency on the Internet today. The increased use of interactive applications on the Internet has made reducing latency and jitter more important, and has propelled the “bufferbloat” problem to the forefront of Internet researchers’ agendas.
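To see why an oversized buffer translates directly into latency, note that once a bulk TCP transfer fills a FIFO buffer, the queueing delay it adds is simply its size divided by the drain rate of the bottleneck link. A back-of-envelope sketch with illustrative numbers (not taken from the dissertation's measurements):

```python
# Worst-case queueing delay added by a full FIFO buffer at a bottleneck.

def queueing_delay_ms(buffer_bytes: int, link_rate_bps: int) -> float:
    """Delay (ms) a full buffer of `buffer_bytes` adds on a link that
    drains at `link_rate_bps` bits per second."""
    return buffer_bytes * 8 / link_rate_bps * 1000

# Example: a 256 KB buffer on a 2 Mbit/s uplink.
delay = queueing_delay_ms(256 * 1024, 2_000_000)
print(round(delay))  # → 1049
```

A quarter-megabyte buffer on a 2 Mbit/s uplink can thus add over a second of delay to every packet behind it, which is exactly the bufferbloat symptom: the loss signal TCP needs arrives a second too late.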
My research investigated the suitability (or lack thereof) of existing Active Queue Management (AQM) techniques for mitigating the “bufferbloat” problem. Major real-life networks deployed in Malta were tested to confirm the presence of the problem, and computer simulations were run to compare the effect of different AQM techniques across different scenarios. I identified the network traffic conditions under which self-tuning AQM techniques like Controlled Delay (CoDel) perform better than classic techniques like Random Early Detection (RED), together with the limitations of both types of AQM, and provided recommendations appropriate for today’s networks as a basis for future research.
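The essential difference between the two AQM families compared above can be sketched as follows (parameter values are illustrative defaults, not the dissertation's tuned settings): classic RED keys its drop probability off an averaged queue length, so its thresholds must be tuned to each link, whereas CoDel compares each packet's sojourn time in the queue against a fixed delay target, which is why it self-tunes across link speeds.

```python
# Simplified sketch of the two AQM decision rules being contrasted.

def red_drop_probability(avg_queue_len: float,
                         min_th: float = 5,
                         max_th: float = 15,
                         max_p: float = 0.1) -> float:
    """RED: drop probability rises linearly with the averaged queue
    LENGTH between two tunable thresholds."""
    if avg_queue_len < min_th:
        return 0.0                      # below min threshold: never drop
    if avg_queue_len >= max_th:
        return 1.0                      # above max threshold: always drop
    return max_p * (avg_queue_len - min_th) / (max_th - min_th)

def codel_over_target(sojourn_ms: float, target_ms: float = 5.0) -> bool:
    """CoDel: a packet counts against the delay budget when its queueing
    TIME exceeds a fixed target, independent of the link's speed."""
    return sojourn_ms > target_ms

print(red_drop_probability(10))   # → 0.05
print(codel_over_target(12.0))    # → True
```

The length-based rule is why RED misbehaves when a link's rate changes (the same queue length means very different delays at different speeds), while the time-based rule is what lets CoDel adapt without per-link tuning.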
The degree was carried out following the award of a STEPS Scholarship, which is part-financed by the European Union – European Social Fund under Operational Programme II - Cohesion Policy 2007-2013, "Empowering People for More Jobs and a Better Quality of Life". This scholarship scheme, launched back in 2009, has certainly enabled hundreds of students like myself to maximise our potential and be better equipped for today's dynamic work environment. A word of thanks goes to the staff at Queen Mary University who were always helpful and whose guidance was essential in reaching a positive final result. And finally, thanks to my parents, my girlfriend, my family and friends, for their support and for always being there during the difficult and strenuous periods.