NFV: Will vRouters ever replace hardware routers?



When I started looking at NFV, I always imagined it being relegated to places in the network that would receive only small amounts of data traffic, since commodity hardware and software could only handle so much. I also naively believed that it would be deployed in networks where customers were not overly sensitive to latency and delay (broadband customers, etc.). So if somebody really wanted a big bang for their buck, they had to use specialized hardware to support the network function. You couldn't really use Intel x86-based servers running software to serve customers for whom QoS and QoE were critical and vital. The two examples that leap to my mind are (i) Evolved Packet Core (EPC) functions such as the Mobility Management Entity (MME), and (ii) BNG environments, where users need to be authorized before they can expect to receive any meaningful service.

While I understood that servers were getting more powerful and that Intel was doing its bit with its Data Plane Development Kit (DPDK) architecture, it didn't occur to me until recently that we would be seeing servers handling traffic at 10G+ line rate. Vyatta, now a Brocade company, uses vRouters to implement real network functions. Vyatta started with its modest 5400 vRouter, which could only handle 1G of traffic at line rate. But then last year it announced the 5600 vRouter, which takes advantage of Intel's multi-core and DPDK architecture to achieve a 10x+ performance improvement. Essentially, DPDK drastically improves performance by passing packets from the NIC directly to code running in user space, bypassing the kernel's interrupt-driven network stack and its expensive context switches and copies, thus speeding up packet processing. Among other things, it also provides a lockless FIFO implementation for packet enqueue/dequeue, since semaphores and spinlocks are expensive.

The Vyatta 5600 vRouter can be installed on pretty much any x86-based server and can support a number of network functions such as dynamic routing, policy-based routing, firewalls, VPNs, etc. Vyatta redesigned its software to make use of multiple cores — while the control plane ran on one core, the data plane was distributed across the others. Using a 4-core processor, they ran the control plane on one core, and the remaining three cores each handled an instance of line traffic. This way Vyatta was able to push 10G of traffic through a single processor.

Now imagine putting three or four such x86-based servers in a network. If you can split the data traffic equitably among them (we will look at this in some other blog post), you can achieve close to 30-40G of aggregate throughput.

Wind River a few weeks ago announced its new accelerated virtual switch (vSwitch) that could deliver 12 million packets per second to guest virtual machines (VMs) using only two processor cores on an industry-standard server platform, in a real-world use case involving bidirectional traffic. 

Many people believe that NFV is best suited to deployment at the edge of the network — close to the customers — and isn't yet ready for the core, or for places where traffic volumes are high or latency tolerance is low. I agree with this, and have covered this aspect in some detail here.

What this shows is that it's entirely possible for virtual routers to run at speeds comparable to regular hardware-based routers, and hence to replace them. This augurs well for NFV, since it means it can be deployed in many more places in the carrier network than most skeptics believed until recently.


7 thoughts on “NFV: Will vRouters ever replace hardware routers?”

  1. Hi Manav,

    Thanks for sharing your thoughts. I do agree that software-based routers, and network functions virtualization more broadly, now seem possible because all the pieces are coming together nicely (i.e. faster CPU cores, better packet handling through improved drivers, application software rewritten to take full advantage of multiple cores, and architectures that dedicate separate cores to the control and data planes).

    I also see one other important element adding to this equation: programmable network adapter cards using FPGAs. These allow early-stage packet pre-processing as per NFV needs (programmability helps keep up with lookup needs, which differ from one point in the network to another), packet distribution to VMs using classification metadata, as well as offloading of mundane tasks to hardware.

    A server appliance using the “x86 + FPGA-based adapter card” approach will bring the realization of NFV in core deployments much closer.

    Saikrishna


  2. Should the vRouter be a VNF or part of the NFVI?
    Thanks to various benchmarks done by 6WIND, both can make sense:
    http://www.6wind.com/6windgate-performance/virtual-switching/
    and
    http://www.6wind.com/6windgate-performance/

    It is important to have an agnostic solution that can run either on SmartNICs (PCI boards with multicore CPUs such as the Cavium LiquidIO), on dedicated cores of the hypervisor, or on vCPUs of the guests.
    Thanks to such an agnostic solution, frameworks such as OpenStack/Neutron can be reused without any monkey patches to Neutron (or custom plugins). As with OpenStack, any other framework used to manage Linux — including all the Linux networking protocols and features such as IPsec, Netfilter (firewall and NAT), OVS, the Linux bridge, bonding, etc. — can be reused.

    About the FPGA comments: an FPGA can help provide some offloads, but it then means that DPDK needs a PMD for that FPGA, and that the fast path must be able to handle and co-work with the custom offload capabilities of each FPGA use case.

    I’d like to add one more comment: never forget the IPsec requirements for a vRouter. Without fat-pipe IPsec capabilities, you’ll never be able to interconnect your datacenters for east-west traffic. Even worse: more and more companies have 1G access links, which means that when all the branch offices open their VPNs, the north-south traffic becomes very heavy. Currently, the 6WINDGate fast path includes an IPsec stack with common DPDK backends for Cavium Nitrox, Intel QuickAssist (Cave Creek and Coleto Creek), or AES-NI. It guarantees the best option for each environment while the same data-plane software stack and Linux are reused.
    For instance:
    http://www.6wind.com/6windgate-performance/ipsec/
    shows about 200 Gbps of IPsec processing for a vRouter.

