Internet latency is the time it takes for a request to go from a source to a destination on the internet. Latency is usually measured in milliseconds. For example, the current network latency from AWS Frankfurt to India is around 259 ms.
Internet latency has huge implications for the speed and user experience of online applications. Even though 259 milliseconds may not sound like much, its effect on the speed and usability of websites and applications is considerable. Every additional 100 ms of latency can potentially increase page load times by 35% to 75%, a huge difference when you consider that 87% of users abandon a video that does not start immediately.
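A hands-on way to observe latency is to time a TCP handshake, which requires one full network round trip. Below is a minimal sketch using only Python's standard library; the commented-out hostname is purely illustrative:

```python
import socket
import time

def tcp_connect_latency_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Time one TCP connection handshake to host:port, in milliseconds.

    The handshake takes a full round trip, so this approximates the
    network latency between this machine and the server.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # we only need the handshake; the connection closes on exit
    return (time.perf_counter() - start) * 1000.0

# Example (needs network access; the hostname is illustrative):
# print(f"{tcp_connect_latency_ms('example.com'):.1f} ms")
```

Running this against servers in different regions makes the distance effect described below directly visible.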
How to reduce network latency?
To learn how to reduce network latency, it is important to first consider where it comes from. Below we take a look at the main sources of latency and the ways in which each can be reduced.
Content delivery network
The most important factor that gives rise to internet latency is distance. Signals travel over the internet at a finite speed, so the greater the distance between a website or application server and the end user, the longer that particular website or application will take to load.
A good way to overcome distance-related network latency is to use a content delivery network (CDN). CDNs operate a network of geographically distributed edge locations in close proximity to end users. By decreasing the distance that data has to travel, CDNs reduce latency and, by extension, page load times. One shortfall of CDNs is their inability to handle dynamic content delivery.
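To see how much of the latency budget is pure distance, we can compute a physical lower bound: light in optical fiber covers roughly 200 km per millisecond, so the great-circle distance between two cities sets a floor on the round-trip time. A rough sketch (the city coordinates are approximate):

```python
import math

C_FIBER_KM_PER_MS = 200.0  # light in fiber travels ~200 km per ms (about 2/3 of c)

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula), in km."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical best-case round trip over fiber, ignoring routing and queuing."""
    return 2 * distance_km / C_FIBER_KM_PER_MS

# Frankfurt (50.1, 8.7) to Mumbai (19.1, 72.9) -- approximate coordinates
d = great_circle_km(50.1, 8.7, 19.1, 72.9)
print(f"{d:.0f} km, best-case RTT ~{min_rtt_ms(d):.0f} ms")  # ~6,500 km, ~65 ms floor
```

The best case for a Frankfurt-to-India round trip works out to roughly 65 ms, so a measured latency of around 259 ms leaves substantial room for improvement through better routing.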
Anycast DNS and BGP
Anycast DNS allows DNS queries to be routed to the topologically nearest DNS server, resulting in reduced network latency and quicker DNS query responses.
Once your query has been resolved into a unique IP address, anycast BGP takes over and routes your request to the topologically nearest web server. Anycast BGP again has the advantage of reducing the distance that requests have to travel, leading to lower latency.
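One simple way to observe the DNS portion of this round trip is to time a lookup through the system resolver; with an anycast provider, the answer comes from the topologically nearest server. A minimal sketch in Python (the hostname you pass in is up to you):

```python
import socket
import time

def dns_lookup_ms(hostname: str) -> tuple[str, float]:
    """Resolve a hostname via the system resolver and time the lookup.

    Returns the resolved IP address and the elapsed time in milliseconds.
    Note that a cached answer will resolve almost instantly.
    """
    start = time.perf_counter()
    ip = socket.gethostbyname(hostname)
    return ip, (time.perf_counter() - start) * 1000.0

# Example:
# ip, ms = dns_lookup_ms("example.com")
# print(f"{ip} in {ms:.1f} ms")
```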
Network bottlenecks
Internet bottlenecks are comparable to real-life highway traffic bottlenecks. When a certain point on a highway has more incoming traffic than outgoing traffic, it tends to get congested and leads to delays. In the same way, if a network router has more data packets coming in than going out, it leads to queuing delays and an increase in latency.
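The queuing effect can be illustrated with a toy model: packets arrive at a router each time step, and any surplus over what the router can forward stays in the queue, where every queued packet adds waiting time. All the numbers below are illustrative:

```python
def queue_backlog(arrivals_per_step, service_rate):
    """Track a router's packet backlog over time.

    When more packets arrive in a step than the router can forward
    (service_rate), the surplus queues up; each queued packet waits,
    which shows up as added latency. Returns the backlog per step.
    """
    backlog = 0
    history = []
    for arrivals in arrivals_per_step:
        backlog = max(0, backlog + arrivals - service_rate)
        history.append(backlog)
    return history

# Router forwards 10 packets/step; a burst of 15/step congests it:
print(queue_backlog([10, 15, 15, 15, 10, 5, 5], service_rate=10))
# → [0, 5, 10, 15, 15, 10, 5]: the queue grows during the burst,
# then drains once arrivals drop below the service rate
```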
Monitoring your network to identify potential network bottlenecks can be helpful in reducing internet latency. Tools like the AWS latency map can be used to test network latency to different IP prefixes.
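A simple monitoring-style check along these lines is to measure connect latency to several candidate endpoints and rank them. A sketch using TCP connect times; the hostnames in the commented example are hypothetical placeholders:

```python
import socket
import time

def connect_ms(host, port=443, timeout=3.0):
    """TCP connect time to host:port in ms; infinity if unreachable."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            pass
        return (time.perf_counter() - start) * 1000.0
    except OSError:
        return float("inf")  # unreachable endpoints sort last

def rank_endpoints(hosts, port=443):
    """Return hosts ordered from lowest to highest connect latency."""
    return sorted(hosts, key=lambda h: connect_ms(h, port))

# Hypothetical regional endpoints -- substitute your own hostnames:
# print(rank_endpoints(["eu.example.com", "ap.example.com", "us.example.com"]))
```

Run periodically, a check like this surfaces latency regressions to specific regions before users report them.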
Network monitoring is a good strategy to get in front of potential network problems. However, it can only take you so far: once a problem like high latency has been identified, network engineers still have to make manual changes to the network topology, which makes monitoring reactive by nature.
Network optimization tools
Network optimization tools improve on network monitoring solutions by proactively automating the process of network optimization.
Network optimization tools are a good bet for avoiding network congestion and reducing network latency. They are built around the concept of smart internet routing: they generate a real-time performance map of the internet and, much like a smart GPS device, route traffic over the network paths with the lowest latency, steering around internet traffic bottlenecks. Nor are they limited to reducing latency; they also reduce packet loss and jitter and optimize bandwidth.
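The "smart GPS" idea boils down to shortest-path routing over latency-weighted links. A minimal sketch using Dijkstra's algorithm over a hypothetical topology (the node names and latencies below are made up for illustration):

```python
import heapq

def lowest_latency_path(links, source, dest):
    """Dijkstra's algorithm over per-link latencies.

    `links` maps each node to {neighbor: latency_ms}. Returns
    (total_latency_ms, path) for the cheapest route found, or
    (inf, []) if the destination is unreachable.
    """
    frontier = [(0.0, source, [source])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == dest:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, ms in links.get(node, {}).items():
            if nbr not in visited:
                heapq.heappush(frontier, (cost + ms, nbr, path + [nbr]))
    return float("inf"), []

# Hypothetical topology: the direct FRA->BOM link is congested (300 ms),
# so routing via MRS wins despite the extra hop.
links = {
    "FRA": {"BOM": 300.0, "MRS": 12.0},
    "MRS": {"BOM": 110.0},
}
print(lowest_latency_path(links, "FRA", "BOM"))  # → (122.0, ['FRA', 'MRS', 'BOM'])
```

Real optimization systems feed continuously updated latency measurements into this kind of computation, so the chosen path changes as congestion moves around.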
To learn more about network optimization, download the AWS Network Optimization Whitepaper.