QoS Challenges with VPNs

One thing that comes up regularly is questions about QoS on VPNs. There are several challenges related to QoS in the typical Internet-connected environments that I come in contact with. These challenges are not really a result of the VPN configuration, but it is often mission-critical traffic that we are trying to prioritize through the VPN, and that traffic is competing with other Internet-destined traffic. This article is an attempt to outline some of the challenges that we typically run into. We will look at what we can do and outline the things that are beyond our control. This article will not get into the specific configuration of the solutions, but will be a baseline for more technical articles.

The first thing that I want to mention is that QoS is really a suite of technologies. We often discuss features like marking, policing, fragmenting, queuing and shaping. When most people mention QoS to me, the specific feature they are talking about is usually some form of priority queuing. This basically means that when traffic starts backing up, high-priority traffic will go out the interface first. This is not a solution for long-term oversubscription of a circuit or service provider contract rate, but it can help overcome issues for specific applications when short bursts of traffic cause momentary oversubscription.
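To make the priority-queuing idea concrete, here is a toy Python sketch (my own simplified two-class model, not any vendor's implementation): when a backlog exists, priority packets are dequeued first, with FIFO ordering preserved within each class.

```python
import heapq
from itertools import count

def drain(backlog):
    """Drain a congested queue: priority packets leave first,
    FIFO within each class (a simplified two-class model)."""
    heap = []
    seq = count()  # arrival order, keeps FIFO within a class
    for name, is_priority in backlog:
        # class 0 (priority) sorts ahead of class 1 (best effort)
        heapq.heappush(heap, (0 if is_priority else 1, next(seq), name))
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

queued = [("web-1", False), ("voice-1", True), ("web-2", False), ("voice-2", True)]
print(drain(queued))  # ['voice-1', 'voice-2', 'web-1', 'web-2']
```

Note that the reordering only matters while a backlog exists; with an empty queue, every packet is sent as soon as it arrives.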

In order to get a clear picture of some of the challenges, let’s look at a typical diagram of two sites. This diagram represents two offices that are each connected to a low cost cable modem service provider with a 5Mb/s download and a 1Mb/s upload.

One thing worth mentioning is the theoretical maximum throughput between these two sites. Since each location only has 1Mb/s upload, the maximum possible site-to-site performance is 1Mb/s. That is Mb with a little "b" representing bits, as opposed to MB, where the big "B" represents bytes. File sizes are usually represented in bytes, but communication rates are typically represented in bits. Additionally, the 1Mb/s site-to-site figure is an overall representation of the throughput. As network engineers, we know that there is overhead in the form of transport protocols and headers. We must also keep in mind that traffic may be competing with other Internet-destined traffic or traffic destined to other VPN destinations.
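The little-b versus big-B math is worth spelling out. A quick back-of-the-envelope calculation (ideal numbers, ignoring protocol overhead and competing traffic):

```python
# Back-of-the-envelope throughput math: contract rates are in bits,
# file sizes are in bytes (1 byte = 8 bits).
upload_bps = 1_000_000             # 1 Mb/s contract rate (little "b")
bytes_per_second = upload_bps / 8  # 125,000 B/s, i.e. 0.125 MB/s

file_mb = 10                       # a 10 MB (big "B") file
seconds = file_mb * 1_000_000 / bytes_per_second
print(seconds)                     # 80.0 seconds, before any overhead
```

In practice, transport headers and competing flows make the real transfer slower than this best case.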

The next thing that is very important to realize is that we really have no control of how traffic is prioritized once it leaves our network. We can use DSCP markings in IP packet headers, but transit ISPs may remark these headers or ignore the markings altogether. So at the end of the day we can’t really have end to end QoS until ISPs all agree to fully comply with consistent per-hop behavior (which isn’t likely to happen for various reasons). In reality, we have very little control once the packet leaves the last device we control.
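For reference, an application can request a DSCP marking itself. This Python sketch sets the marking on a UDP socket; the `dscp_to_tos` helper is my own illustration of the bit layout, and, as noted above, nothing obligates transit ISPs to honor the result.

```python
import socket

# DSCP occupies the top six bits of the IP TOS/Traffic Class byte,
# so the byte written to the socket is the DSCP value shifted left by 2.
def dscp_to_tos(dscp):
    return dscp << 2

EF = 46  # Expedited Forwarding, commonly used for voice

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_to_tos(EF))
# Packets sent from this socket now carry DSCP EF (TOS byte 0xB8),
# but a transit ISP may remark or ignore the field entirely.
sock.close()

print(hex(dscp_to_tos(EF)))  # 0xb8
```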

Looking back at our diagram, we can see that there are two separate local service providers. Additionally, there is an unknown number of transit service providers represented by the dark blue area of the cloud. We know we can’t have end to end guarantees, but we need to work with what we have. Assuming there is some high priority traffic that needs consistent access to bandwidth between site A and site B, we could place the desired traffic into a priority queue.

The general concept of queuing is storing packets when the interface is saturated. The queue is then used to keep the transmit pair on the interface busy until the queue is emptied. If certain traffic is defined as priority, it goes to the front of the line during these periods of congestion. Looking back at the drawing once again, we should realize we have a problem with this. We have a contract rate with our service provider of 1Mb/s upload. The problem is that we are probably not using a 1Mb/s interface to connect to the cable modem.

Regardless of whether this is an ASA, IOS or some other device, the line rate is most likely 100Mb/s or greater. Unless we are actually sending traffic faster than the line rate, queuing does not occur in our device. Since TCP slows down when packets are dropped, that speed will most likely never be reached. So even though a priority queue could be configured, it would probably not have any effect. Another thing worth mentioning is that we can't really prioritize inbound traffic coming into our device; the device we control is on the wrong end of the circuit or link to do that.

In our diagram, our equipment is sending traffic at the line rate of the connection between the device under our control and the cable modem. Somewhere in the service provider network (most likely in the cable modem), the ISP is throttling us back to the contract rate. Therefore, the service provider equipment is what would have to do the queuing but this creates another problem. This would place a device that is outside our control with the responsibility of reordering packets to meet our business and technological challenges. What we really need to do is move this function of reordering traffic into our device, but our device is too fast and is never queuing packets. Without a queue of packets, our device sees no reason to reorder traffic.
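A simple fluid-model calculation illustrates the point (the `max_backlog` helper is purely illustrative): backlog only accumulates when the arrival rate exceeds the rate at which the egress can drain.

```python
def max_backlog(offered_mbps, service_mbps, burst_seconds=1.0):
    """Fluid model of an egress queue: backlog accumulates only while
    the arrival rate exceeds the rate the interface can drain."""
    surplus = offered_mbps - service_mbps
    return max(surplus, 0) * burst_seconds  # megabits queued after the burst

# A 5 Mb/s burst into a 100 Mb/s interface: no queue ever forms,
# so a priority queue on that interface has nothing to reorder.
print(max_backlog(5, 100))  # 0.0

# The same burst into a 1 Mb/s egress: 4 Mb backs up wherever the
# rate reduction happens -- today, in the service provider's gear.
print(max_backlog(5, 1))    # 4.0
```

The goal of the next section is to make that second case happen inside our own device instead of the provider's.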

The solution to this is a feature called hierarchical priority queuing. The first step in hierarchical priority queuing is creating a traffic shaping queue that actually limits all traffic to something equal to (or even slightly less than) the contract rate. The reason I mentioned slightly less is due to the fact that this is not an exact science. The way the contract rate is calculated may not transfer exactly to the line rate calculations of the traffic shaping configuration on our device. Basically creating this traffic shaping configuration allows us to move the bottleneck of the network to the device that we control. Anything that our device ultimately sends should be within the contract rate and would only be dropped if the service provider has congestion.
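A token bucket is the classic mechanism behind a traffic shaper. This toy Python class is my own sketch of the idea, not ASA or IOS code; the 950 kb/s figure is simply an example of shaping slightly below a 1 Mb/s contract rate.

```python
class TokenBucketShaper:
    """Toy token-bucket shaper: limits egress to a configured rate
    slightly below the contract rate, so the bottleneck (and therefore
    the queue that priority queuing acts on) lives in our device."""
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps     # e.g. 950_000 for ~95% of a 1 Mb/s contract
        self.burst = burst_bits  # bucket depth
        self.tokens = burst_bits
        self.last = 0.0

    def allow(self, packet_bits, now):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True   # send now
        return False      # hold in the queue, where priority applies

shaper = TokenBucketShaper(rate_bps=950_000, burst_bits=12_000)
print(shaper.allow(12_000, now=0.0))    # True: the bucket starts full
print(shaper.allow(12_000, now=0.001))  # False: only ~950 tokens refilled
```

Packets refused by the bucket accumulate in a queue inside our device, which is exactly where we want the congestion to occur.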

Once we have moved the egress bottleneck to our router by using a traffic shaper, we can specify the desired traffic as priority. This allows those packets to be moved to the front of the line during times of congestion and allows us to see which traffic is causing congestion based on our contract rate. It is worth noting that this does not really change the electrical properties and the modulation of the TX pair on our device. It is still a 100Mb/s interface; it is simply sending traffic only 1% of the time.

Although this solution will not guarantee end-to-end packet handling, it does address the area in the network where packets are most likely to be discarded. Packets arriving at 100Mb/s or 1000Mb/s stand a strong chance of being discarded as the connection is stepped down to 1Mb/s. If we can control this and prioritize the desired traffic, the priority traffic has a much greater chance of reaching its final destination. Service providers can certainly experience congestion and drop traffic, but it is less likely than when the speed is being drastically reduced by the cable modem, DSL modem or other PE device. In future articles, we will look at the technical details of configuring hierarchical priority queuing on the ASA and prioritizing traffic through a VPN.

About Paul Stewart, CCIE 26009 (Security)

Paul is a Network and Security Engineer, Trainer and Blogger who enjoys understanding how things really work. With over 15 years of experience in the technology industry, Paul has helped many organizations build, maintain and secure their networks and systems.