Would you like to run a global network latency test for your website or application, all the way down to individual network end points? Here is how.
Why is network latency important?
Network latency is the time it takes a request to travel from one point to another on the internet. It is a key indicator of the speed of your website or application. High network latency invariably leads to longer page load times and sluggish applications: videos take longer to load and buffer, online games skip around, drop frames, and stop responding instantly to user input, search queries and checkout processes take longer to complete, and VoIP calls suffer from jitter and dropouts. High network latency has been shown to have very real business consequences for web services and applications.
End users do not like slow. Measuring, monitoring, and optimizing network latency is therefore of paramount importance if web services are to guarantee a great end-user experience.
There are many tools out there that can test network latency. Most of them, however, measure end-to-end latency between, say, a client computer on one end and a web server on the other. Measuring network latency this way is useful to individual end users who want to know the latency their particular end point on the network is experiencing. Online gamers, for example, are particularly interested in measuring latency from their location to the game server. Wikihow has a very detailed article describing how to test latency from a client or end user to a web server.
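As a rough illustration of this client-side style of measurement, the sketch below times a TCP handshake to a server, which approximates one network round trip. It is a minimal example only; the hostname and port are placeholders and this is not the method from the Wikihow article or any particular tool.

```python
# Minimal sketch: approximate one round trip by timing a TCP handshake.
# "example.com" and port 443 are placeholder values.
import socket
import time

def tcp_round_trip_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Return the time in milliseconds to complete a TCP handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; only the handshake time matters here
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    print(f"round trip: {tcp_round_trip_ms('example.com'):.1f} ms")
```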
Web services and applications, however, need a completely different paradigm for testing network latency. Most websites see internet traffic from millions of end points on the network, corresponding to geographically distributed individual end users. These end users all experience different latencies because of the different distances and routes between their computers and the web server. For network latency tests to be useful to these services, all of these end-point latency measurements have to be collected, aggregated, and analyzed. Once a comprehensive network latency map, incorporating the latencies to the various end points around the world, has been generated, a policy to minimize latency can be put in place.
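To make the aggregation step concrete, here is a toy sketch of turning per-end-point latency samples into a simple latency map keyed by network prefix. The prefixes and latency values are invented for illustration and do not come from any real measurement system.

```python
# Toy sketch: aggregate per-end-point latency samples into a "latency map".
from collections import defaultdict
from statistics import median

# (prefix, latency_ms) pairs as they might arrive from distributed probes
samples = [
    ("203.0.113.0/24", 42.1), ("203.0.113.0/24", 45.7),
    ("198.51.100.0/24", 180.3), ("198.51.100.0/24", 175.9),
]

latency_map = defaultdict(list)
for prefix, ms in samples:
    latency_map[prefix].append(ms)

# One summary number per prefix makes it easy to spot the worst-served regions.
for prefix, values in latency_map.items():
    print(f"{prefix}: median {median(values):.1f} ms over {len(values)} samples")
```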
How to run a network latency test
This is where the Datapath.io network latency map comes in. The latency map is a valuable test tool for service providers. The Datapath.io system runs continuous network latency tests from the major cloud and bare-metal setups to individual prefixes over multiple upstream transit providers.
The internet comprises upwards of 600,000 (and growing) network prefixes, which together cover every geographic region worldwide. Once the network latency from the application's web server to an individual prefix is known, it is a small step to compute the latency to the individual user sitting at home.
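That "small step" amounts to finding which announced prefix covers a user's IP address and reusing the latency measured for that prefix. The sketch below shows the idea with Python's standard ipaddress module; the prefixes and latency values are made up for illustration.

```python
# Sketch: map a user's IP to its covering prefix and reuse that prefix's latency.
import ipaddress
from typing import Optional

prefix_latency_ms = {
    "203.0.113.0/24": 44.0,     # illustrative values only
    "198.51.100.0/22": 178.0,
}

def latency_for_user(ip: str) -> Optional[float]:
    addr = ipaddress.ip_address(ip)
    matches = [
        (ipaddress.ip_network(prefix), ms)
        for prefix, ms in prefix_latency_ms.items()
        if addr in ipaddress.ip_network(prefix)
    ]
    if not matches:
        return None
    # Prefer the most specific (longest) matching prefix, as routers do.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(latency_for_user("203.0.113.55"))  # -> 44.0
```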
The Datapath.io latency map gives service providers granular, IP-level visibility into the latencies their networks are suffering from. This visibility is not limited to a single cloud or upstream transit provider. It encompasses major cloud providers, including AWS, Google Compute Engine, Rackspace, and DigitalOcean, as well as major upstream transit providers such as Cogent, Hibernia, and Level 3. This allows web services and applications to determine their network latency to end users all over the globe, over almost all network routes.
Reducing Network Latency
Monitoring network latency, however, is not the end of the story. Once the latency map has been analyzed to pinpoint the regions or prefixes suffering from high network latency, mitigating it is the next logical step. Datapath.io allows service providers to reduce network latency by up to 4 times by re-engineering the Border Gateway Protocol (BGP). BGP is responsible for making routing decisions on the internet. In most cases, however, it routes internet traffic over the shortest path, which leads to congestion building up in those network pipes. Datapath.io avoids these congested routes by making routing decisions based on which routes have the lowest measured network latency rather than the shortest distance.
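The difference between the two policies can be illustrated with a small sketch: given several candidate upstream routes to the same prefix, pick the one with the lowest measured latency instead of the one with the shortest path. The provider names, path lengths, and latency values below are invented, and this is only a conceptual illustration, not Datapath.io's actual routing logic.

```python
# Toy illustration: shortest-path choice vs. latency-aware choice for one prefix.
routes_to_prefix = [
    {"upstream": "provider-A", "path_len": 2, "latency_ms": 95.0},  # shortest, but congested
    {"upstream": "provider-B", "path_len": 3, "latency_ms": 41.0},
    {"upstream": "provider-C", "path_len": 4, "latency_ms": 58.0},
]

shortest_path = min(routes_to_prefix, key=lambda r: r["path_len"])
lowest_latency = min(routes_to_prefix, key=lambda r: r["latency_ms"])

print("shortest-path choice:  ", shortest_path["upstream"])
print("latency-aware choice:  ", lowest_latency["upstream"])
```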