What WAN Can Learn from TSA
Director of Solutions, VeloCloud
April 20, 2015
I was at the airport last weekend and, unsurprisingly, there was a long line at security screening. As most air travelers would agree, we all want to get through security screening as quickly and as painlessly as possible.
Just to give you a picture of security screening at the airport: there were quite a few different screening lanes, one each for TSA PreCheck, Premium, airline crew, and regular passengers. As I waited in line, I saw a TSA officer efficiently redirecting us poor souls to different lanes. She monitored each lane, told the next person which lane to go to based on availability and the number of travelers already in it, and used the Premium lanes when they were free. I got through screening in no time.
Just like the multiple security screening lanes at the airport, today's enterprises already utilize multiple WAN links. The lane management process reminded me of ECMP, a well-known technique for WAN link load balancing. When there are multiple next hops to the same destination, a WAN router employing ECMP uses a pre-defined algorithm, such as round-robin or hashing packet headers, to decide which next hop, and therefore which WAN link, to use. ECMP is typically done per flow (the default behavior in Cisco CEF), so its effectiveness very much depends on the traffic mix and the load-sharing algorithm. Therein lies its limitation: if you want to transfer a large amount of data in a single flow, e.g. a backup, per-flow load balancing will not help, because the entire flow is pinned to one link.
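To make the per-flow behavior concrete, here is a minimal sketch of hash-based ECMP in Python. The link names and 5-tuple values are made up for illustration; real routers hash in hardware, but the principle is the same: every packet of a flow hashes to the same link.

```python
import hashlib

# Hypothetical link names; a real router would have actual next hops here.
NEXT_HOPS = ["wan-link-1", "wan-link-2", "wan-link-3"]

def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port):
    """Hash the flow's 5-tuple and map it to one of the available links."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(NEXT_HOPS)
    return NEXT_HOPS[index]

# Every packet of the same flow gets the same answer, so a single large
# backup flow can never use more than one of the three links.
link = ecmp_next_hop("10.0.0.1", "192.0.2.10", "tcp", 40000, 443)
assert link == ecmp_next_hop("10.0.0.1", "192.0.2.10", "tcp", 40000, 443)
```

The determinism is exactly why per-flow ECMP avoids reordering, and exactly why it cannot speed up a single flow.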
Why not per-packet load balancing with ECMP? Cisco has discussed the potential issues, and Ivan Pepelnjak also covered them here. In short, blindly sending packets of the same flow down multiple paths with totally different characteristics, in bandwidth, packet loss, and latency, causes more harm than good. Worse, these WAN characteristics change in real time. The result is out-of-order delivery and retransmissions, which severely impact application performance.
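A toy simulation shows how the reordering happens. The latency figures are assumed for illustration: packets sent 1 ms apart are sprayed round-robin across two links whose one-way latencies differ, and the receiver sees them out of sequence.

```python
# Hypothetical one-way latencies (ms) for two WAN links.
LATENCY_MS = {"link-a": 10, "link-b": 60}

def spray_round_robin(num_packets):
    """Send packets 1 ms apart, alternating links; return receive order."""
    links = list(LATENCY_MS)
    arrivals = []
    for seq in range(num_packets):
        link = links[seq % len(links)]               # blind round-robin
        arrivals.append((seq + LATENCY_MS[link], seq))  # arrival time, seq
    arrivals.sort()                                   # order at the receiver
    return [seq for _, seq in arrivals]

print(spray_round_robin(8))  # → [0, 2, 4, 6, 1, 3, 5, 7]
```

Every packet sent on the slow link arrives after a burst from the fast one; to TCP that gap looks like loss and triggers retransmissions.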
However, per-packet load balancing, if done properly, can really take advantage of multiple WAN links and boost the transfer speed of even a single application flow. Imagine having a faster backup to the cloud and faster file sharing as a result. Just as the TSA officer monitored each security lane and sent each traveler down the right one so that everyone got through as quickly as possible, per-packet load balancing must know its traffic and the real-time conditions of each WAN link.
The ideal per-packet load balancer must first recognize the application within the traffic flow, and apply per-packet load balancing only to applications that benefit from a higher transfer rate, e.g. backup or file transfer applications. It must take real-time link conditions into account and compensate for the differences between links. It must also ensure in-order packet delivery and mitigate packet loss.
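These requirements can be sketched as a condition-aware per-packet scheduler. Everything here is a hypothetical illustration, not VeloCloud's implementation: the link names, stats, and loss threshold are invented, and the stats are hard-coded where a real system would measure them continuously.

```python
# Assumed real-time link stats; a real system measures these continuously.
LINKS = {
    "mpls":  {"free_mbps": 20, "loss_pct": 0.0},
    "cable": {"free_mbps": 80, "loss_pct": 0.1},
    "lte":   {"free_mbps": 30, "loss_pct": 5.0},
}
LOSS_THRESHOLD_PCT = 2.0  # assumed cutoff: skip lossy links for bulk traffic

def pick_link():
    """Choose the usable link with the most free bandwidth right now."""
    usable = {n: s for n, s in LINKS.items()
              if s["loss_pct"] < LOSS_THRESHOLD_PCT}
    return max(usable, key=lambda n: usable[n]["free_mbps"])

def send(seq, payload):
    """Tag each packet with a sequence number so the receiver can reorder."""
    return {"seq": seq, "link": pick_link(), "payload": payload}

print(send(0, b"chunk-0")["link"])  # → cable (most headroom, low loss)
```

The sequence number is what lets the receiver restore in-order delivery, and the loss threshold is one simple way to "compensate" for a degraded link by steering bulk packets away from it.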
What do you think? Do you have any experience, good or bad, with per-packet load balancing? In the next blog post, I am going to share real world examples and results.