Light at the End of the Tunnel: Introduction to Network Latency Part 2

August 29th, 2015

As we reviewed in our first post, there are some interesting challenges in reaching the theoretical speed limit when we talk about latency across a network. Our example is two corporate data centers, one in New York and one in Seattle. Let’s call them NY01 and SEA01 for ease of naming.

[Image: map]

We know that we already have a few constraints that create latency:
– Distance (5000 km)
– Jitter – that’s the bouncy castle effect
– Data packet size

Packets, Headers and Encapsulation, Oh My!

A network packet is not just a data packet. The data is encapsulated as the datagram portion of the network packet, wrapped in headers. In other words, there is a fair amount of overhead in the network packet that tells the network some key information about it:

[Image: packet_format2_EtherPeek]

Many networks are configured for a 1500-byte MTU (Maximum Transmission Unit) for compatibility across the data path, but some can bump up to what we call Jumbo Frames, which support up to 9000 bytes.

Within the header we have the source and destination, along with many other pieces of information, which maps neatly to the way an addressed envelope gets from New York to Seattle.
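To make that overhead concrete, here is a minimal Python sketch (not from the original post) that estimates how much of each frame is actual payload at a standard 1500-byte MTU versus a 9000-byte jumbo frame, assuming typical Ethernet, IPv4, and TCP headers with no options and ignoring the preamble and inter-frame gap:

```python
# A rough look at payload efficiency per frame, assuming typical header sizes
# (14-byte Ethernet header + 4-byte FCS, 20-byte IPv4 header, 20-byte TCP
# header, no options) and ignoring the preamble and inter-frame gap.
ETH_OVERHEAD = 14 + 4
IP_HEADER = 20
TCP_HEADER = 20

def payload_per_frame(mtu_bytes):
    payload = mtu_bytes - IP_HEADER - TCP_HEADER  # the MTU covers the IP packet
    on_wire = mtu_bytes + ETH_OVERHEAD            # plus the Ethernet framing
    return payload, payload / on_wire

for mtu in (1500, 9000):
    payload, efficiency = payload_per_frame(mtu)
    print(f"MTU {mtu}: {payload} payload bytes per frame ({efficiency:.1%} of the wire)")
```

At a 1500-byte MTU roughly 96% of each frame is payload; jumbo frames push that above 99%, which is part of their appeal on high-throughput links.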

This is where the fun comes in. We talk about bandwidth as the total amount of data that can travel across the network over a period of time, represented in Mbps (megabits per second) or Gbps (gigabits per second). That metric needs some initial thought in the context of the data we have to send.

Data speed is the measurement of the total amount of data we can actually move, with latency applied to the metric as well:

Data speed (bps) = (8 × block size) / (distance / 299,792 + (8 × block size) / bandwidth)

(block size in bytes, distance in km, bandwidth in bps; 299,792 km/s is the speed of light)

So if we use the example of a 100 Mbps connection with 5000 km between NY01 and SEA01, we can work out some numbers. Using a 1500-byte MTU, we end up with a data speed of roughly 714,362 bps.
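If you want to check that number yourself, here is a minimal sketch of the formula above in Python. The constant is the speed of light in km/s, and the block size is the 1500-byte MTU:

```python
# A minimal sketch of the data speed formula above, using the NY01 -> SEA01
# numbers: 5000 km, a 100 Mbps link, and a 1500-byte block (the MTU).
SPEED_OF_LIGHT_KM_S = 299_792.458  # speed of light in km/s

def data_speed_bps(block_size_bytes, distance_km, bandwidth_bps):
    """Bits sent in one trip divided by propagation plus serialization time."""
    bits = 8 * block_size_bytes
    propagation_s = distance_km / SPEED_OF_LIGHT_KM_S
    serialization_s = bits / bandwidth_bps
    return bits / (propagation_s + serialization_s)

print(int(data_speed_bps(1500, 5000, 100_000_000)))  # ~714362 bps, as above
```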

No Bunnies or Beer, Just Hops

We’ve figured out our throughput, not accounting for natural jitter (aka the bouncy castle), but this gives us the ability to move on to our next logical challenge affecting network performance: hops.

You may already know that when you ping a network-attached device, it sends a sequence of 64-byte ICMP echo requests across the network and tells you the response time:

[Image: ping output]

If we go one step further and trace the route to that destination, we will see that there are a few turns along the way:

[Image: traceroute output]
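If you want to reproduce this on your own machine, here is a rough sketch that shells out to the system traceroute and counts the hops it reports. The command name and output format vary by platform (tracert on Windows), so treat the count as an approximation, and example.com is just a stand-in destination:

```python
import subprocess

# Rough sketch: run the system traceroute and count the hops it reports.
# The command and its output vary by OS (use ["tracert", "-d", host] on
# Windows), so treat this as an approximation, not a precise measurement.
host = "example.com"  # stand-in destination; substitute SEA01's address

result = subprocess.run(
    ["traceroute", "-n", host],
    capture_output=True, text=True, timeout=120,
)

print(result.stdout)
hop_lines = [line for line in result.stdout.splitlines() if line.strip()]
print(f"Approximate hop count: {max(len(hop_lines) - 1, 0)}")  # first line is the header
```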

This is what we talked about earlier: not only are networks not a straight line, but packets also travel across a number of routers, switches, and various other pieces of network infrastructure. We aren’t even talking about the time it takes for the NSA to sniff the packet 😉

Based on this example, we can see that there are 12 hops from the source to the destination. At each hop, the network infrastructure makes a decision about which path to take, one hop at a time, until the packet reaches the destination. There is a limit on the number of hops allowed in the network path (the packet’s TTL, or time to live), but luckily the fully meshed network across the provider’s infrastructure ensures that nothing has to exceed it.

At every hop, the packet’s framing is stripped off, the header is read, and the packet is re-encapsulated with a new layer-2 source and destination before being sent along its merry way. Network gear has made significant strides in reducing the time it takes to make these forwarding decisions, but no matter how good that intelligence gets, the delay is always greater than zero milliseconds. Think of it as a few pit stops along a road trip.
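The post doesn’t put a number on per-hop processing, so as a purely illustrative sketch, here is what a hypothetical 50-microsecond delay at each of the 12 hops would add on top of the propagation and serialization time from the formula above:

```python
# Purely illustrative: stack a hypothetical per-hop processing delay on top of
# the propagation and serialization delays from the data speed formula above.
SPEED_OF_LIGHT_KM_S = 299_792.458

distance_km = 5000
bandwidth_bps = 100_000_000
block_bits = 8 * 1500
hops = 12
per_hop_delay_s = 50e-6  # hypothetical 50 microseconds of processing per hop

propagation_s = distance_km / SPEED_OF_LIGHT_KM_S
serialization_s = block_bits / bandwidth_bps
hop_delay_s = hops * per_hop_delay_s

total_s = propagation_s + serialization_s + hop_delay_s
print(f"One-way trip time: {total_s * 1000:.2f} ms "
      f"({hop_delay_s * 1000:.2f} ms of it from the hops)")
print(f"Effective data speed: {block_bits / total_s:,.0f} bps")
```

Even that modest assumption shaves the effective data speed from roughly 714 kbps down to about 690 kbps, before jitter or queuing enters the picture.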

What Does This Mean?

The end result is that we have to understand the difference between the theoretical limits of the network and its actual performance. The other thing to remember is that bandwidth is not throughput: throughput is the available bandwidth with latency applied to the trip.
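Using the same simple model as the formula above, you can watch throughput creep toward, but never reach, the raw 100 Mbps bandwidth as the block of data sent in one trip grows:

```python
# Same simple model as the data speed formula above: throughput approaches,
# but never reaches, the raw 100 Mbps bandwidth as the block of data grows.
SPEED_OF_LIGHT_KM_S = 299_792.458

def throughput_bps(block_size_bytes, distance_km=5000, bandwidth_bps=100_000_000):
    bits = 8 * block_size_bytes
    return bits / (distance_km / SPEED_OF_LIGHT_KM_S + bits / bandwidth_bps)

for block in (1_500, 64_000, 1_000_000, 100_000_000):
    print(f"{block:>11,}-byte block: {throughput_bps(block) / 1e6:6.2f} Mbps")
```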

As we can see, there is a surprising amount of potential for degradation in the overall trip time due to unavoidable physical and logical elements in the data path. These aren’t failures in the design, but they must be accounted for. That raises the importance of understanding networking limitations and constraints in the context of application performance. Many continuously fluctuating factors will affect the application and should drive decisions about your application architecture.

Image sources: http://www.wildpackets.com/resources/compendium/ethernet/ethernet_packets , screenshots
