COMMUNICATIONS
22 May 2020 | Categories: Communications



By Keith Laaks, Executive Head - Technology at Vox 


Back when you had ADSL and tested your speed by downloading files, you often got very close to the speed of your line, be it 4Mbps, 10Mbps or 20Mbps. You understood that the ‘missing Mbps’ was due to internet protocol overhead that you need to account for, and were happy with your experience overall.
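As a rough sketch of that ‘missing Mbps’ (my own illustration, with assumed header sizes, not figures from the article): every packet on the line carries TCP and IP headers alongside the payload, so some fraction of the raw line rate is never available to your download.

```python
# Illustrative only: estimate usable payload rate after per-packet
# TCP/IPv4 header overhead, assuming a typical 1500-byte Ethernet MTU.
MTU = 1500          # bytes per packet on the wire (assumed)
IP_HEADER = 20      # IPv4 header without options, bytes
TCP_HEADER = 20     # TCP header without options, bytes

def goodput_mbps(line_mbps: float) -> float:
    """Payload rate left over once headers are subtracted from each packet."""
    payload_fraction = (MTU - IP_HEADER - TCP_HEADER) / MTU
    return line_mbps * payload_fraction

for line in (4, 10, 20):
    print(f"{line} Mbps line -> ~{goodput_mbps(line):.1f} Mbps of payload")
```

Real ADSL carried further encapsulation overhead (PPPoE, ATM cells) on top of this, so actual goodput was a little lower still; the point is simply that a few percent of the line rate was always spoken for.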


Now, a fibre infrastructure provider rolls out in your area and you decide to upgrade to a 100Mbps line. You eagerly run the same test again, and based on your past experience, you expect to see a download speed around the 95Mbps mark. But you are horrified to find that the download performance on that file is only around 35Mbps - no matter how many times you redo your test. 


You then turn to some of the speed-testing services available on the internet, and the results there don’t look right to you either. Internet speed-testing platforms are notoriously inaccurate and unreliable, yet too many people rely on them to test their online speeds without understanding the multitude of factors that can impact internet performance, and are left disgruntled with their internet service provider (ISP). So what’s going on here?


Some of the misunderstanding can perhaps be traced back to the days of ADSL, when the line was the bottleneck and internet users became accustomed to upgrading the pipe to get better performance – typically going from a 1Mbps line to a 4Mbps to a 10Mbps line and so on. However, in the fibre era, where we are seeing pipes of 50 to 100Mbps, or even 1Gbps, the line is no longer the bottleneck.


Many speed-testing platforms have limitations: they either use flawed methodology or overlook important aspects that affect the quality and performance of your connection to the internet.


Understanding capacity and throughput 


Firstly, it is important to understand the difference between capacity and throughput. Capacity - popularly known as bandwidth - is what ISPs sell, and it is a measure of the amount of data per second (Mbps) that a communication link can transport. Throughput, on the other hand, is the actual amount of data that is successfully sent or received over the communication link.
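The distinction is easy to see in numbers. A speed test measures throughput - bits actually transferred divided by elapsed time - which is what the following sketch computes (the 50MB file and 11.4-second timing are illustrative values of my own, chosen to land near the 35Mbps result described earlier, not data from the article):

```python
def measured_throughput_mbps(num_bytes: int, seconds: float) -> float:
    """Throughput = bits actually transferred / elapsed time, in Mbps."""
    return num_bytes * 8 / seconds / 1_000_000

# e.g. a 50 MB test file that took 11.4 s to download
# on a 100 Mbps (capacity) line:
print(f"~{measured_throughput_mbps(50_000_000, 11.4):.1f} Mbps")
```

The line's capacity never appears in the calculation at all - which is exactly why a test can report ~35Mbps on a 100Mbps link.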


Speed tests measure throughput and not capacity, and there are a number of constraints in the system that prevent throughput from ever reaching capacity levels. For instance, the architecture of some laptops – even when connected to the router via a cable rather than Wi-Fi – is simply not designed to reach 1Gbps. In addition, the way the machine is set up – its CPU, operating system, browser and antivirus software – all impact throughput speeds.


Another basic mistake people make is performing a speed test over Wi-Fi, forgetting that the farther you are from your access point, the lower the link capacity. Consequently, as you move further away from your access point, Wi-Fi throughput drops rapidly and becomes your overall limiting factor, especially on faster line speeds.


Choosing the right server and latency


Some speed-testing services geo-locate users by their IP address and then automatically select the ‘closest’ server when running a test. Unfortunately, their logic is somewhat broken. Instead of constraining server selection to servers connected to the user’s ISP network, they choose any server, regardless of the ISP network it is connected to. You then end up with situations where a user in George (for example), whose fibre access circuit first touches his ISP’s network in Cape Town, runs his speed test to a small WISP’s server in George with only a 50Mbps internet connection. In this example, the customer will need to manually override the auto-selection and instead run the test to a server in Cape Town, in order to get a more realistic result.


Yet, the bigger issue is that of latency, which is determined by the overall path (routers and interconnecting links) that IP packets and signals traverse, and is a huge factor in TCP/IP performance. TCP, part of the TCP/IP communications protocol, performs end-to-end error checking of data transmission for reliable and sequential exchange of data.


Back in the 1970s, when the protocol was invented, latency wasn’t an issue, but that has since changed, and the protocol itself is now a limiting factor. Keep in mind that while the primary job of the internet’s routers and those of your ISP is to route IP packets, it is the end users’ devices and applications, and content providers’ servers, that choose to use a protocol (TCP) that now constrains throughput performance to levels lower than the link capacity. There is nothing your ISP can do to fix this.
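The constraint can be sketched with simple arithmetic: a single TCP connection can never move more than one window of unacknowledged data per round trip, so its throughput is capped at window size divided by round-trip time, regardless of the line's capacity. The 64KB window and 15ms round trip below are illustrative assumptions of mine (modern stacks can scale the window well beyond 64KB), but they show how such a cap can land near the 35Mbps figure from the opening example:

```python
def tcp_throughput_ceiling_mbps(window_bytes: int, rtt_ms: float) -> float:
    """One TCP stream can't exceed (window / round-trip time),
    no matter how fast the underlying link is."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1_000_000

# Classic 64 KB receive window, 15 ms round trip to the test server
# (assumed values): the cap sits near 35 Mbps even on a 100 Mbps line.
print(f"~{tcp_throughput_ceiling_mbps(65_535, 15):.1f} Mbps ceiling")
```

Halve the round-trip time (for instance by testing against a nearer server) and the ceiling doubles - which is why server selection matters so much in the previous section.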


Bigger highway, but the same car


Think of it this way: your car is constrained by its design, aerodynamics and engine performance to a top speed of, say, 200km/h when driving down a single-lane road. If we now upgrade that same road to four lanes, can your car do 800km/h? No. It can still only achieve 200km/h. But four cars, each travelling at 200km/h, can now drive down the road at the same time. The same situation occurs over fast internet links.
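The highway analogy maps directly onto parallel TCP streams: each connection is individually capped, but several together can fill the pipe, up to the link's capacity. A minimal sketch, using illustrative numbers of my own (a ~35Mbps per-stream cap on a 100Mbps fibre line):

```python
def aggregate_mbps(per_connection_mbps: float, connections: int,
                   link_capacity_mbps: float) -> float:
    """Parallel TCP streams share the pipe, up to the link's capacity."""
    return min(per_connection_mbps * connections, link_capacity_mbps)

# One capped stream vs. several, on an (assumed) 100 Mbps fibre line:
print(aggregate_mbps(35, 1, 100))   # one car: stream cap is the limit
print(aggregate_mbps(35, 4, 100))   # four cars: the road itself is now full
```

This is exactly the trick applications use in practice, as the closing section notes: open enough connections and the link, not the protocol, becomes the bottleneck again.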


Most consumers are totally unaware of the above factors and do not understand that the ‘results’ they get from one of the many speed-testing websites will vary significantly based on server selection, the algorithm, process and protocol used to calculate the speed, the browser or app technology being used, and so on. It is important that internet users have at least a basic understanding of these factors, instead of reading too much into speed tests, which will almost always be misleading and inaccurate.


So, is there even any point in getting a higher capacity fibre link?


Actually, yes. Many applications establish multiple concurrent connections to get around the limitations of the TCP/IP protocol. YouTube, for example, fires up 12 connections when you click on a video. As the number of user devices and use cases keeps increasing over time, links eventually become congested at certain times of the day, and then users start experiencing the effects, such as video quality degradation or buffering, slow browsing or downloading, and more.


On higher capacity links, however, applications are much more responsive, as they don’t step on one another, and users can complete certain activities without having to wait as long as they otherwise would.
