Latency determines how fast the Internet feels. Content providers focused on web performance are at the forefront of low network latency innovation.
We define network latency as:
“The time it takes for a ping to reach an individual Internet node and return to its point of origin. This ping is reported as round trip time (RTT) in milliseconds (ms).”
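To make the definition concrete, here is a minimal sketch of measuring RTT in Python. A true ICMP ping requires raw sockets (and usually root privileges), so this sketch times a TCP handshake instead, which is a common unprivileged stand-in; the host and port in the usage comment are illustrative, not from the original.

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Approximate one network round trip by timing a TCP handshake.

    This is a stand-in for ICMP ping: connect() completes after one
    SYN/SYN-ACK exchange, so its duration is roughly one RTT.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # handshake complete; connection is closed on exit
    return (time.perf_counter() - start) * 1000.0

# Example (requires network access):
# print(f"RTT: {tcp_rtt_ms('example.com'):.1f} ms")
```

The returned value includes TCP handshake overhead on top of pure propagation time, so treat it as an upper bound on the RTT the definition describes.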
Network latency is a real, practical problem, and it has the largest impact on professionals trying to optimize performance. High network latency degrades both application and web performance, which leads to poor customer experiences and lower conversion rates. In the world of Wall Street, it can mean millions of dollars.
Taking the definition further, the goal is low network latency: reducing the time it takes for data to move from one location to another. Geographical distance and the speed of light are the limiting factors. Low versus high network latency is the difference between good and bad performance: low latency means a responsive web or application experience, while high latency means a sluggish one.
What is the Internet Speed Benchmark?
Before we begin discussing what low network latency is, we need to understand our benchmark. Data cannot move faster than the speed of light, so the speed of light is our benchmark. The reference point: “20ms round trip time (RTT) is equal to ~3000km, or an 1860-mile radius, for light traveling in a vacuum.”
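This reference point is easy to verify with a small calculation: half of the RTT bounds the one-way distance light can cover in a vacuum.

```python
# Sanity-check the benchmark: how far can light travel in a vacuum
# during a 20 ms round trip? Half the RTT bounds the one-way radius.
C_KM_PER_S = 299_792.458  # speed of light in a vacuum

def max_radius_km(rtt_ms: float) -> float:
    one_way_s = (rtt_ms / 1000.0) / 2.0
    return C_KM_PER_S * one_way_s

radius_km = max_radius_km(20.0)       # ≈ 2998 km
radius_miles = radius_km * 0.621371   # ≈ 1863 miles
```

The result, roughly 3000 km or an 1860-mile radius, matches the quoted reference point, and any real network falls short of it: fiber carries light at about two-thirds of c, and routing is never a straight line.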
With 20ms as the physical floor for that distance, we can begin a meaningful discussion. Google currently runs about 100ms of network latency, so the goal is to close the gap between that figure and the physical limit.
The Bottleneck is Network Latency
A big concern for application performance across a distributed team or a cloud platform is network latency. Many factors affect application performance, but on a distributed system, network latency becomes the bottleneck.
Why is Network Latency the Bottleneck?
Network latency, in short, is harder to optimize than bandwidth, which is the more widely advertised performance metric. That makes network latency the bottleneck, and the more difficult conversation.
Network latency matters to industries ranging from FinTech to gaming, and many in between, because it affects the deliverability of content. The deliverability of content in turn affects web performance, application performance, and data transfer capabilities.
The problem arises when building an application and designing for performance: network latency is difficult to predict. There is an expectation of how an application should run, and without predictable network latency, the result can be packet loss or congestion. That becomes the DevOps team's problem.
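One way a team might quantify that unpredictability is to summarize RTT samples by percentiles rather than by the mean, and set timeout budgets from the tail of the distribution. This is a minimal sketch, not a prescribed method from the original; the sample values are hypothetical.

```python
import statistics

def latency_profile(samples_ms: list[float]) -> dict[str, float]:
    """Summarize RTT samples so a timeout budget can be set from the
    tail of the distribution rather than the (optimistic) mean."""
    ordered = sorted(samples_ms)
    def pct(p: float) -> float:
        # Nearest-rank style percentile over the sorted samples.
        idx = min(len(ordered) - 1, round(p * (len(ordered) - 1)))
        return ordered[idx]
    return {
        "mean": statistics.fmean(ordered),
        "p50": pct(0.50),
        "p95": pct(0.95),
        "p99": pct(0.99),
    }

# Hypothetical samples: mostly ~30 ms, with occasional congestion spikes.
samples = [30.0] * 90 + [120.0] * 9 + [400.0]
profile = latency_profile(samples)
```

For these samples the mean is about 42 ms while p95 is 120 ms: a timeout set near the mean would cut off a visible share of requests, whereas budgeting for the tail keeps behavior predictable under jitter.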
Network latency is more than just a term to define. It is a problem worth solving and a metric worth optimizing: doing so increases web and application performance, and gives developers a more consistent network to rely on when building applications.