NFV and SDN – The death knell for the huge clunky routers?

At the last IETF I ran into a couple of hallway discussions where folks were having a lively debate on whether Network Function Virtualization (NFV) and Software Defined Networking (SDN) will eventually sound the death knell for huge clunky hardware vendors like Cisco, Juniper, Alcatel-Lucent, etc. I was quickly apprised of some Wall Street analyst's report that projected a significant drop in Cisco's revenue over the next couple of years as service providers moved to SDN and NFV solutions. I heard claims about how physical routers (that I so lovingly build at AlaLu) will get replaced by virtual routers (vRouters) and other server-based software that even small startups could build. The barrier to entry in the service provider markets had suddenly been lowered, and the monopoly of the big 3 was being ominously challenged. There was talk about capex reductions happening in service provider networks, and how a few operators were holding on to their purchase orders to see how the SDN and NFV story unfurled. And then there was a different camp that believed that while SDN and NFV promised several things, it would take time before anything got really deployed and started affecting capex spending and OEM revenues.

So what's the deal?

Based on my conversations with several folks actively looking into SDN/NFV, and a good bit of reading, I understand that operators are NOT interested in replacing their edge aggregation and core routers with software-driven vRouters. They still want to continue with those huge clunky beasts with full control plane intelligence embedded alongside their packet-pushing data plane. These routers are required to respond to network events in real time (remember FRR?) to prevent outages and slowdowns. Despite all the performance improvements, general purpose processors can typically process no more than 2-3 Gbps per core (Intel, with its DPDK module and APIs for Open vSwitch, promises better throughput), which is embarrassingly slow when compared to the 400-600 Gbps that's possible with NPUs and ASICs today. Additionally, routers using non-Ethernet ports (DSL, PON, coherent optical, etc.) cannot be easily virtualized, since general purpose CPUs cannot perform the network functions along with the DSP work required to support these ports.
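
A rough back-of-the-envelope calculation (a sketch in Python, using the approximate figures just quoted, not a benchmark) shows the size of that gap:

```python
# Rough arithmetic: how many general-purpose cores would it take to match
# a single NPU/ASIC-based forwarding engine? Figures are the rough ones
# quoted above.
per_core_gbps = 2.5      # ~2-3 Gbps per x86 core
npu_gbps = 500           # ~400-600 Gbps per NPU/ASIC

print(f"~{npu_gbps / per_core_gbps:.0f} cores")   # ~200 cores per NPU
```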

So while a mobile gateway that essentially forwards packets can be virtualized, it would only make sense to do so where the amount of traffic it's handling is relatively small.

So where can we deploy these NFV-controlled, server-based vRouters?

The Provider Edge (PE) router does several things today, a few of which could easily be moved out and implemented on standard server hardware. ETSI's NFV use cases document (case #2) identifies the vPE as a potential NFV use case. The PE router in the MPLS world connects the customer edge (CE) router at the customer premises to the P routers in the provider network. The PE router serves as the service delimiter, providing L3 VPNs, VPLS, VLL, CDNs and other services to the customers.

The ETSI NFV use-case document (case #2) describes how enterprises are deploying multiple services in branch offices; several of these enterprises use dedicated standalone appliances to provide these services (firewalls, IDS/IPS, WAN optimization, etc.), which is "cost prohibitive, inflexible, slow to install and difficult to maintain".

As a result, many enterprises are looking at virtualizing the enterprise CPE (access router) and outsourcing it into the operator's network.

Increased capex and opex pressure is pushing enterprises and providers to look at the virtualization capabilities made possible by NFV. So, let's look at what all can be virtualized by NFV.

The ETSI NFV use-case document states that "Traditional IP routers based on custom hardware and software are amongst the most capital-intensive portions of service-provider infrastructure. PE routers run out of control plane resources before they run out of data plane resources and virtualization of control plane functions improves scalability."

It further states that moving some of the control plane to equivalent functionality implemented in standard commercial servers deploying NFV can result in significant savings.

The figure below gives an idea of the components that can be moved out of the PE router and onto an NFV-powered server.

Network functions/services that can be offloaded from the PE router

If we're able to push out the functions/services shown in the figure above, the PE router effectively gets reduced to a device that's mainly pushing packets, with the vPE becoming the device for service delivery. NFV appears to be most effective at the edge of the network where customers are served — this also happens to be mostly Ethernet, which works in NFV's favor since other port types cannot be served as effectively.

Operators believe NFV can be used for mobile packet core functions for 3G and EPC. LTE operators believe that while the basic packet-pushing functions must still reside in the routers, the other ancillary functions that have been added to the routers over time are good candidates for NFV. We can keep BRAS, firewalls, IDS, WAN optimizers, and other service functions separate and use the physical router merely for transferring the packets.

Clearly, the vPE can handle many network functions that are currently performed by conventional physical routers. While the PE may still handle pushing the packets, the intelligence for many of the services typically handled by the PE can be moved to the vPE. This is a paradigm shift from what PE routers have been doing all this while. The network functions and services that can be moved to the vPE are:

  • Mobile packet core functions for 3G and LTE EPC
  • Firewalls (FW) and IDS/IPS (Intrusion Detection and Intrusion Prevention systems)
  • Deep Packet Inspection (DPI)
  • CDNs (content delivery networks) and caching
  • IP VPNs – control plane to set up the MPLS VPNs
  • VLLs and VPLS – control plane to set up the MPLS VPNs

These functions can be virtualized to run on servers under NFV or can be SDN-controlled. Where they reside in the network will depend upon the QoS and QoE (Quality of Experience) required by the customers. If latency and speed are an issue, the functions should reside in servers close to the customers. But if latency is not an issue, the functions could reside deep in the provider network or in a remote data center.

Conclusion

Operators will deploy NFV and SDN, and this will impact their buying decisions. It's clear that they will not be replacing their core and edge aggregation routers with NFV-driven software solutions. Instead, NFV will be used at the edge to offload service functions from the hardware PE router onto servers running the vPE in the NFV environment, letting operators deliver new services to end users quickly and generate higher revenue.

There is thus no need for the Ciscos, Junipers and AlaLus of the world to worry about falling revenues, since the NFV-powered solutions are not targeting their highest-margin businesses — at least not yet!

BFD in the new Avatar

We all love Bi-directional Forwarding Detection (BFD) and can't possibly imagine our lives without it. We love it so much that we were ready with sabers and daggers drawn when we approached IEEE to let BFD control the individual links inside a LAG — something that's traditionally done by LACP.

Having done that, you would imagine that people would have settled down for a while (after their small victory dance of course) — but no, not the folks in the BFD WG. We are now working on a new enhancement that really takes BFD to the next level.

There isn't anything egregiously wrong or missing per se in BFD today. It's just not very optimal in certain scenarios, and we're trying to plug those holes (while doing our bit to ensure that folks in the datacomm industry have ample work and remain perennially employed).

OK, let's not be modest – there are some scenarios where it doesn't work (as we shall see).

So what are we fixing here?

Slow Start

Well, for one, BFD takes awfully looooong to bring up a session. Remember that BFD starts with sedate timers and then slowly picks up the pace (each side needs to come to an agreement on the rate at which it will send packets). So it takes a while before BFD can really be used for path/end-node liveness detection. If BFD is being used to validate an MPLS path/LSP, it takes a few additional seconds for BFD to come up because of the LSP ping bootstrapping procedures (RFC 5884).
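
For the curious, the slow start comes straight from the timer negotiation in RFC 5880: each side advertises its Desired Min TX and Required Min RX intervals, and the transmit rate is the slower of "what I want to send at" and "what the peer is willing to receive at". A minimal sketch, with illustrative values:

```python
# A sketch of RFC 5880 timer negotiation (values in microseconds).

def negotiated_tx_interval(local_desired_min_tx, remote_required_min_rx):
    # Never transmit faster than the peer has said it can receive.
    return max(local_desired_min_tx, remote_required_min_rx)

# Sessions must start with sedate timers (>= 1 second per the RFC) ...
print(negotiated_tx_interval(1_000_000, 1_000_000))  # 1s while coming up
# ... and can only ramp up to fast timers once both sides have agreed.
print(negotiated_tx_interval(50_000, 50_000))        # 50ms after negotiation
```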

In certain deployments this delay is bad, and we want to eliminate it. Some MPLS deployments are expected to require traffic engineered LSPs to be created dynamically, driven by external applications as in Software Defined Networks (SDN). It is operationally critical to ensure that the forwarding paths are up (via BFD) before the applications start using the newly created tunnels. We hence can't wait for BFD to take its time coming up when the applications are ready to push data down the tunnels. So, something needs to be done to get BFD to come up FAST!

This is an issue in SDN domains where a centralized controller is managing and maintaining the dynamic network. Since the tunnels are being engineered by this centralized entity, we want to be really sure that a new tunnel is up before sending traffic down that path. In the absence of additional control protocols (e.g., RSVP) we might want to use BFD to ensure that the path is up before using it. Current BFD, with its large set-up times, can become a bottleneck. If the centralized controller can quickly verify the forwarding path, it can steer traffic onto the traffic engineered tunnel very quickly without adversely affecting the service.

The problem is exacerbated as the scale of the network and the number of traffic engineered tunnels increase.

Unidirectional Forwarding Path Validation

The "B" in BFD stands for "Bi-directional" (in case you missed that). The protocol was originally defined to verify bidirectional connectivity between two nodes. This means that when you run BFD between routers A and B, both A and B come to know when the other router goes down (or when something nasty happens to the link). However, there are many scenarios where only one of the routers is interested in verifying the data plane continuity between the two nodes (e.g., a static route using BFD to validate reachability to the next-hop router, or a unidirectional tunnel using BFD to validate reachability to the egress node). In such cases, validating the reverse direction is not required.

However, traditional BFD requires the other side to maintain the entire BFD state even if it's not interested in the liveness of the remote end. So if you have "n" routers using a particular gateway, the gateway has to maintain "n" BFD sessions with all its clients. This is not required and can easily be done away with.

Anycast Addresses

Anycast addressing is used for high availability, fast recovery, load balancing and dispersed deployments, where the IGPs direct traffic to the nearest server within a group of potential servers all sharing the same Anycast address. BFD as defined today is stateful, and hence cannot work with Anycast addresses.

With the growing need to use Anycast addresses for higher reliability (DNS, multicast, 6to4, etc.), there is a need for a BFD variant that can work with Anycast addresses.

BFD Fault Isolation

BFD works in a binary state – it either tells you that the session is UP or that it's DOWN. In case of failures it doesn't help you identify and localize the fault. Using other tools to isolate the fault may not necessarily work, as the OAM packets may not follow the exact same path as the BFD packets (e.g., when ECMP is employed).

There is hence a need for a BFD variant with capabilities that can help in fault isolation.

So, where does all this lead?

We have attempted to fix all the issues described above in a new BFD variant that we call "Seamless BFD" (S-BFD). It's stateless, and the receiver (or reflector) responds with an S-BFD response packet whenever it receives an S-BFD packet from the source. You can imagine this as a ping-pong game between the source and the destination routers. The source (the client in S-BFD speak) wants to check that the path to the destination (the Reflector in S-BFD speak) is UP, or that the Reflector itself is UP, and sends an S-BFD "ping" packet. The Reflector, upon receiving this, responds with an S-BFD "pong" packet. The client, upon receiving the "pong", knows that the Reflector is alive and starts using the path.

Each Reflector selects a well-known "Discriminator" that all the other devices in the network know about. This can be statically configured, or a routing protocol can be used to flood/distribute this information — we could use OSPF/IS-IS within an AS and BGP across ASes. Any client that wants to send an S-BFD packet to this Reflector (or server, if that helps) sends the S-BFD packet with that peer's Discriminator value.

A Reflector receiving an S-BFD packet with its own Discriminator value responds with an S-BFD packet. It must NOT transmit any S-BFD packet based on a local timer expiration.
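
In pseudo-Python, the entire Reflector logic is about this big (a sketch of the behavior, not the draft's wire format; field names are illustrative):

```python
# Why the Reflector is stateless: it holds only its own discriminator(s),
# answers anything addressed to one of them, and never transmits on a timer.

REFLECTOR_DISCRIMINATORS = {0xACDC0001}  # well-known, flooded via IGP/BGP

def reflector_rx(packet):
    """Handle one received S-BFD packet; no per-session state is kept."""
    if packet["your_discriminator"] not in REFLECTOR_DISCRIMINATORS:
        return None  # not ours - drop silently
    # Swap the discriminators and bounce the packet straight back.
    return {
        "my_discriminator": packet["your_discriminator"],
        "your_discriminator": packet["my_discriminator"],
        "state": "UP",
    }
```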

A router can also advertise more than one Discriminator value for others to use. In such cases it should accept all S-BFD packets addressed to any of those Discriminator values. Why would somebody do that?

You could, if you wanted to implement some sort of redundancy. A node could choose to terminate S-BFD packets with different Discriminator values on different line cards for load distribution (this works for architectures where a hardware BFD controller resides on each line card). Two nodes can then have multiple S-BFD sessions between them (similar to the micro-BFD sessions that we have defined for LAGs in RFC 7130) — where each session terminates on a different line card (demuxed using the different Discriminator values). The aggregate BFD session will only go down when all the component S-BFD sessions go down; hence the aggregate session between the two nodes remains alive as long as there is at least one component S-BFD session alive. This is another use case that can be added to S-BFD, btw!

This helps in SDN environments where you want to verify the forwarding path before actually using it. With S-BFD you no longer need to wait for a session to come up. The centralized controller can quickly use S-BFD to determine if the path is up. If the originating node receives an S-BFD response from the destination, it knows that the end point is alive, and this information can be passed to the controller.

Similarly, applications in SDN environments can quickly send an S-BFD packet to the destination. If they receive an S-BFD response, they know that the path can be used.

This also alleviates the issue of maintaining redundant BFD session states on the servers, since they only need to respond with S-BFD packets.

Authentication becomes a slight challenge since the Reflector is not keeping track of the crypto sequence numbers (remember, the point was to make it stateless!). However, this isn't an insurmountable problem and can be fixed.

For more sordid details refer to the IETF draft in the BFD WG that explains the Seamless BFD protocol, and the companion draft with the use cases. I have not covered all the use cases for Seamless BFD (S-BFD) here; a few more are described in the use-case document.

iOS7’s impact on networks worldwide

Apple releases an iOS update and networks all across the world witness a spike of almost 100% in the average traffic they receive. Apple delivers its content using Akamai, which allegedly handles 20% of the world's total web traffic. Akamai is thus in a unique position to provide a view of what's happening on the web at any given instant. Akamai's logs clearly showed an overall increase in Internet traffic, and hotspots in Europe, soon after Apple released iOS7.

Akamai showing traffic hotspot in Europe

Most service providers saw Akamai and Limelight traffic up by an average of 300-700% immediately after iOS7 was released.

Being an Android user myself, I found iOS7's release, with the massive increase in Internet traffic reported all over the world, quite suspicious. Honestly, I was a trifle concerned about what iOS7 was internally doing to cause this.

It turned out to be quite an anti-climax when I realized that the spurt in network traffic was simply due to Apple devices upgrading to the newer iOS. The iOS7 upgrade for phones is around 900MB, and that for iPads is around 1.2GB. Given that there are quite a few of these devices out there, one only needs to multiply the device count by the upgrade size to realize the traffic volumes that service providers all across the world are grappling with.
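
A quick back-of-the-envelope (with assumed, illustrative device counts) shows why providers sweat an upgrade day:

```python
# Back-of-the-envelope upgrade-day traffic; device counts are assumptions.
phone_update_gb, ipad_update_gb = 0.9, 1.2
phones_per_home, ipads_per_home = 4, 2      # a gadget-happy household

per_home_gb = phones_per_home * phone_update_gb + ipads_per_home * ipad_update_gb
print(f"{per_home_gb:.1f} GB per household")         # 6.0 GB

homes = 100_000                              # a mid-sized provider's footprint
print(f"{per_home_gb * homes / 1000:.0f} TB through the network")  # 600 TB
```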

It's well known that Apple fans don't want to wait before they go in for an upgrade. The iOS7 adoption rate has been the highest ever for any platform (beating their own iOS6 rate, which was itself phenomenal in all respects). It's claimed that within two days of its release, iOS7 was already running on more than half of all Apple devices out there (and there are a lot of them, btw).

Google, meanwhile, is perplexed about how to improve the miserably low adoption rate of new Android OS versions. The problem seems to stem from the fact that most Android devices just do not receive updates in a timely manner, and the ones that do typically only get an update roughly six months after a new version is released.

Jelly Bean (the latest version of Android) is currently on fewer Android devices than iOS 7 is on iOS devices. This may not seem mind-boggling, until you realize that iOS 7 has been out for only 5 days (as of this post) whereas Android Jelly Bean has been around for a little more than a year and a half.

iOS's high adoption rate is a headache for several service providers since, let's face it, all of them oversubscribe their access links. This is done by design, since it's assumed that not everyone will demand full bandwidth at the same time. Usually it works well; sometimes it doesn't, as we just saw.

Most homes have multiple iOS devices, so this translates into each household pulling 5-6 GB worth of iOS updates in a single day. Multiply this by thousands and you'll see the volume of traffic each provider has to absorb in the week an iOS update is released.

Having a CDN that caches the iOS7 update would definitely help in any large deployment. What could also help, some people suggest, is if each of these Apple "i" devices advertised an "iOS update available" service locally, so that other "i" devices could merely download the update from there, as long as the signature is valid (all images are signed).

This, at the very least, would improve the user experience (no more Facebook/Twitter updates on how slow the iOS upgrade was) and could potentially help avoid clogging the Internet tubes.

A few service providers are furious with Apple as they watch their customers complain that their network/Internet access is slow. There is a camp that thinks it's pretty dumb on Apple's part to make its OS update available globally on the same day — Microsoft and others have a strategy of providing incremental, staggered downloads. Others suggest that Apple should do this on weekends, when traffic volumes are low. I strongly disagree with this line of reasoning and believe it's parochial to declare war on Apple — remember, iOS updates are user pulls, not Apple pushes. It's the operators who should update their infrastructure to gracefully handle such events — today it's an iOS7 release, tomorrow it could be something else (Obama in a political sex scandal?). If this means getting fatter pipes, or talking to CDN vendors to put caches in their networks, or putting up their own caches, then this ought to be done. If they do not or cannot have a CDN cache, they could explore connecting to an Internet Exchange (IX) that does. IX peering, I am told, is not prohibitively expensive in most countries.

Ben quite succinctly summed it up on a NANOG mailing list: "Your (the service provider) user is paying you to push packets. If that's causing you a problem, you either need to review your commercial structure (i.e. charge people more) or your technical network design. Face the facts, what with everyone jumping on the "cloud" bandwagon, the future is only going to see you pushing more packets, not less! So if you can't stand the heat, get out of the kitchen (or the xSP industry)."

How bad is the OSPF vulnerability exposed by Black Hat?


A few weeks ago our field engineers asked me to provide a fix for the OSPF vulnerability exposed at Black Hat last month. Prima facie there appeared to be nothing new in this attack, as everyone knows that OSPF (or IS-IS) networks can be brought down by insider attacks. This isn't the first time an OSPF vulnerability has been announced at Black Hat. Way back in 2011, Gabi Nakibly, a researcher at Israel's Electronic Warfare Research and Simulation Center, demonstrated how OSPF could be brought down using insider attacks. Folks were not impressed, as anybody who had access to one of the routers could launch attacks on the routing infrastructure. So it was with a certain skepticism that I started looking at yet another OSPF vulnerability exposed by Gabi, again at Black Hat. It's only when I started delving deeper into the attack vector that its real scale dawned on me. This attack evades OSPF's natural fight-back mechanism against malicious LSAs, which makes it a good bit more insidious than the other attacks reported so far.

I exchanged a few emails with Gabi when I heard about his latest exposé. I wanted to understand how this attack was really different from the numerous other insider attacks that have been published in the past. Insider attacks are not very interesting, really. If you were careless enough to let somebody access your trusted router, or somebody was smart enough to masquerade as one of your routers and was able to inject malicious LSAs, then the least you can expect is a little turbulence in your routing infrastructure. However, this attack stands apart from the others, as we shall soon see.

OSPF (and IS-IS too) has a natural fight-back mechanism against any malicious LSA that gets injected into a network. When an OSPF router receives an LSA that lists that router as the originating router (referred to as a self-originated LSA), it looks at the contents of the LSA (just in case you didn't realize this). If the received LSA looks newer than the LSA that this router last originated, the router advances the LSA's LS sequence number one past the received LS sequence number and originates a new instance of this LSA. In case it's not interested in this LSA, it flushes the LSA by originating a new instance with its age set to MaxAge.

All the other routers in the network now update their LS databases with this new instance, and the malicious LSA effectively gets purged from the network. Voilà, it's that simple!

As a result, the attacker can only flood malicious LSAs inside the network until the router that the malicious LSA purports to come from (the victim router) receives a copy. As soon as this router floods an updated copy, it doesn't take long for the other routers in the network to update their LS DBs as well – the flooding process is very efficient at disseminating information, since network diameters are typically not huge, and yes, packets travel at the speed of light. Did you know that?

In the attack that Gabi described, the victim router does not recognize the malicious LSA as its own and thus never attempts to refresh it. As a result the malicious LSA remains stealthily hidden in the routing domain and can go undetected for a really long time. Thus, by controlling a single router inside an AS (the one that will flood the malicious LSA), an attacker can gain control over the entire routing domain. In fact, an attacker need not even gain control of an entire router inside the AS. It's enough if it can somehow inject the malicious LSAs over a link such that one of the OSPF routers in the network accepts them. In the media release, Black Hat claimed: "The new attack allows an attacker that owns just a single router within an AS to effectively own the routing tables of ALL the routers in that AS without actually owning the routers themselves. This may be utilized to induce routing loops, network cuts or longer routes in order to facilitate DoS of the routing domain or to gain access to information flows which otherwise the attacker had no access to."

So what is this attack?

Let's start by looking at what the LS header looks like.

LS Header

In this attack we are only interested in two fields of the LS header: the Link State ID and the Advertising Router. In the context of a Router LSA, the Link State ID identifies the router whose links are listed in the LSA; it's always populated with the Router ID of that router. The Advertising Router field identifies the router that initially advertised (originated) the LSA. The OSPF spec dictates that only a router itself can originate its own LSA (i.e., no router is expected to originate an LSA on behalf of other routers); therefore, in Router LSAs the two fields – 'Link State ID' and 'Advertising Router' – must carry the exact same value. However, the OSPF spec does not specify a check to verify this equality on Router LSA reception.

Unlike several other IETF standards, the OSPF spec is very detailed, leaving little room for ambiguity in interpreting and implementing the standard. This is usually good, as it results in interoperable implementations where everybody does the right thing. The flip side, however, is that since everybody follows the spec to the letter, a potential bug or omission in the standard will very likely affect several vendor implementations.

This attack exploits exactly such an omission (or a bug, if you will): the standard does not mandate that the receiving router verify that the Link State ID and the Advertising Router fields in a Router LSA carry the exact same value.
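
The eventual fix is conceptually tiny. Here's a sketch (in Python, with illustrative field names, not any vendor's actual code) of the sanity check the spec never mandated:

```python
# A Router LSA whose Link State ID and Advertising Router disagree cannot
# be legitimate, so reception code can simply discard it.

ROUTER_LSA = 1

def accept_lsa(lsa):
    if lsa["ls_type"] == ROUTER_LSA and \
       lsa["link_state_id"] != lsa["advertising_router"]:
        return False  # forged, per the attack described below - drop it
    return True       # proceed with the normal OSPF acceptance checks
```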

The attack sends malicious Router LSAs with two different values in the LS header: the Link State ID carries the Router ID of the router being attacked (the victim), while the Advertising Router is set to some different (any) value.

When the victim receives the malicious Router LSA, it does not refresh this LSA, as it doesn't recognize it as its own self-generated LSA. This is because the OSPF spec clearly says in Sec 13.4 that "A self-originated LSA is detected when either 1) The LSA's Advertising Router is equal to the router's own Router ID or 2) the LSA is a network LSA .. ".

This means that OSPF's natural fight-back mechanism is NOT triggered by the victim router as long as the 'Advertising Router' field of an LSA is NOT equal to the victim's Router ID. This is true even if the 'Link State ID' of that LSA is equal to the victim's Router ID. Going further, it means no LSA refresh is triggered even if the malicious LSA claims to describe the links of the victim router!

When this LSA is flooded, all the routers accept it and install it in their LS databases, where it exists alongside the valid LSA originated by the victim router. Each router in the network now has two Router LSAs for the victim router – the first genuinely originated by the victim router, and the second inserted by the attacker.

When running the shortest path first computation, the OSPF spec in Sec 16.1 requires implementations to pick up the LSA from the LS DB by doing a lookup "based on the Vertex ID". The Vertex ID refers to the Link State ID field of the Router LSA. This means that when computing SPF, routers identify LSAs based only on their Link State ID. This creates an ambiguity about which LSA gets picked up from the LS database: will it be the genuine one originated by the victim router, or the malicious LSA injected by the attacker? The answer depends on how the data structures for LS DB lookup have been implemented in each vendor's routers. The ones that pick up the wrong LSA are susceptible to the attack; the ones that don't are oblivious to the malicious LSA sitting in their LS DBs.
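
To see why some implementations get hit and others don't, here's the ambiguity in miniature — a Python sketch with illustrative data structures, not any vendor's actual code:

```python
# If the SPF lookup index is keyed on the Link State ID alone, the forged
# LSA can shadow the genuine one; keying on the full pair keeps them apart.
genuine = {"link_state_id": "10.0.0.1", "advertising_router": "10.0.0.1"}
forged  = {"link_state_id": "10.0.0.1", "advertising_router": "6.6.6.6"}

vulnerable_db = {}                 # keyed on Link State ID only
for lsa in (genuine, forged):
    vulnerable_db[lsa["link_state_id"]] = lsa      # forged LSA wins the slot

safe_db = {}                       # keyed on (LS ID, Advertising Router)
for lsa in (genuine, forged):
    safe_db[(lsa["link_state_id"], lsa["advertising_router"])] = lsa  # both kept
```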

Most router implementations are vulnerable to this attack, since nobody expected the scenario where multiple LSAs with the same Link State ID would exist in the LS DB. It turns out that at least 3 major router vendors (Cisco, Juniper and Alcatel-Lucent) have already released advisories and announced fixes/patches for this issue. The fix for the 7210 will be out soon ..

Once again, the attacker does not need to have an OSPF adjacency to inject the forged LSAs.

Doing this is not as difficult as we might think. There is no need for the attacker to know the current LS sequence number – all it needs to do is send an LSA with a reasonably high sequence number, say something like MAX_SEQUENCE - 1, to get the LSA accepted.

The attack can also be performed without complete information about the OSPF topology, though this depends heavily on the attack scenario and on what piece of false information the attacker wishes to advertise on behalf of the victim. For example, if the attacker wishes to disconnect the victim router from the OSPF topology, merely sending an empty LSA without knowing the OSPF topology in advance would work. Failing that, the attacker can get partial information on the OSPF topology by using traceroutes, etc. This way the attacker can construct LSAs that look very close to what the victim router originally advertised, making it all the more difficult to suspect that such LSAs exist in the network.

DNS poisoned for LinkedIn. Affects us? Sure, it does.


If you were unable to access LinkedIn for almost an entire day earlier this week, you can take solace in the fact that you were not the only one. Almost half the world shared your misery, with all attempts to access LinkedIn (and several other websites) going awry. This purportedly happened because a bunch of hackers decided to poison the DNS entries for LinkedIn and some other well-known websites (fidelity.com being another).

Before we delve into the sordid details of this particular incident, let's quickly take a look at how DNS works.

Whenever we access linkedin.com, our computer must resolve the human-readable address "linkedin.com" into a computer-readable IP address like "216.52.242.86" that's hosting the website. It does this by asking a DNS server to return an IP address that can be used. The DNS server responds with one or more IP addresses at which you can reach linkedin.com. Your computer then connects to one of those addresses.
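
You can watch this resolution happen from your own machine with a couple of lines of Python (a minimal sketch using only the standard library; the resolver consulted is whatever your OS is configured with):

```python
import socket

# Ask the configured resolver (typically the home router or the ISP's
# cache) for the addresses behind the name.
for family, _, _, _, sockaddr in socket.getaddrinfo(
        "linkedin.com", 443, proto=socket.IPPROTO_TCP):
    print(sockaddr[0])   # the address(es) your browser would connect to
```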

So where is this DNS server that I just spoke about?

This DNS server typically sits with your Internet service provider, and it caches information from other DNS servers. The router we have at home also functions as a DNS server, caching information from the ISP's DNS servers — this is done so that we don't have to perform a full DNS lookup every time we access a website whose IP address we have already resolved.

Now that we know the basics, let's see what DNS poisoning is.

A DNS cache is said to be poisoned if it contains an invalid entry. For example, if an attacker "somehow" gains control of a DNS server and changes some of the information on it — it could, for instance, say that citibank.com actually points to an IP address the attacker owns — then that DNS server, whenever asked to resolve citibank.com, would tell its users to look for citibank.com at the wrong address. The attacker's address could host some sort of malicious phishing website resembling the original citibank.com, or could simply be used to drop all traffic. The latter is done when ISPs want to block all access to a particular website. China typically does this for a lot of websites — it's called the Great Firewall of China. There are multiple techniques that China employs to implement its censorship, and one of them is DNS poisoning (more here).

Wildfire

DNS poisoning spreads like wildfire because of how DNS works. Internet service providers clearly cannot hold information about all websites in their DNS caches – they get their DNS information from other DNS servers. Now assume they are getting that information from a compromised server: the poisoned DNS entry or entries will spread to the Internet service providers and get cached there. It will then spread to other ISPs that get information from this DNS server. And it wouldn't stop there; it would spread to routers at campuses and homes, and to the DNS caches on individual user computers. Everybody who requests a DNS resolution of the hijacked website will receive an incorrect response and will forward traffic to the address specified by the attacker.

LinkedIn.com and a number of other organizations have registered their domain names with Network Solutions. For some inexplicable reason their DNS nameservers were replaced with nameservers at ztomy.com. The nameservers at ztomy.com were configured to reply to DNS requests for the affected domains with IP addresses in the range 204.11.56.0/24. That address range belongs to Confluence Networks, so all traffic bound for LinkedIn was re-routed to networks hosted by Confluence Networks.

But what caused the nameservers to be replaced?

According to Network Solutions (NS), they were hit by a distributed denial-of-service (DDoS) attack on the night of 19/06. This is certainly plausible, since Network Solutions, being the original registrar for .com, .net, and .org domain names, is an attractive target for attackers. Most of you will remember the (in)famous August 2009 NS server breach, which allegedly led to the exposure of names, addresses, and credit card numbers of more than 500,000 people who made purchases on websites hosted by NS.

A spokesperson from Network Solutions had the following to say regarding the DNS poisoning issue:

“In the process of resolving a Distributed Denial of Service (DDoS) incident on Wednesday night, the websites of a small number of Network Solutions customers were inadvertently affected for up to several hours.”

They have reassured customers that no confidential data has been compromised as a result of the incident.

The jury, meanwhile, is still out on whether this was a configuration error or a coordinated DNS attack on Network Solutions.

Regardless of what it was, the fact is that an enormous amount of LinkedIn traffic was redirected to some other network. This should make all of us very nervous, since LinkedIn does not use Secure Sockets Layer (SSL), which means that all communication between you and LinkedIn goes in plaintext — leaving you vulnerable to eavesdropping and man-in-the-middle attacks. If an attacker is able to intercept all data being sent between a browser and a web server, they can see and use that information. In this event, all traffic bound for LinkedIn was diverted to IP addresses owned by Confluence Networks.

This isn't the first time LinkedIn has compromised the security of its users. Earlier, in June 2012, nearly 6.5 million hashed passwords were compromised when they were dumped onto a Russian hacker forum. Around the same time, a team of mobile security researchers discovered that LinkedIn's mobile app for iOS was transmitting information about calendar entries made in that app, including sensitive information like meeting locations and passwords, back to LinkedIn's servers without users' knowledge.

Not only is this a clear violation of users' privacy (which is a different discussion, btw), but it is also extremely dangerous if the data transfer is not being done securely, as it leaves LinkedIn users very vulnerable to eavesdropping attacks.

So when the DNS entry for LinkedIn was poisoned, all our confidential information was diverted to unknown servers that can mine that data in whatever manner they find most amusing. I just hope you didn't have any confidential data plugged into your LinkedIn iOS app, as somebody somewhere may be reading all of it as you read this blog post.

Why do we need specialized switches in Data Centers?


What's the big deal about data centers, and why do they need special routers and switches anyway? Why can't they use the existing switches that folks use in their back offices or that service providers use in their networks? What's so special, really, about a bunch of servers that need Internet connectivity, huh?

Having worked in the metro Ethernet space all my life, I wasn't sure I really understood the hype, or the reason why data centers required specialized hardware.

It's only once I started reading about data centers, how they work and what they're supposed to do, that I was able to appreciate their need for specialized hardware, and why the existing products may not be cut out for them.

In the world of Wall Street, milliseconds can mean billions of dollars. Really, I am not kidding here. Packets carrying Wall Street transactions get delivered to the switch and are then forwarded to a server in the data center. There they ride up the protocol stack to the application that executes the trade. The commit message then has to go back down the stack and be sent over the wire to the switch. The switch does a lookup in its forwarding tables and sends it out on an egress port.

One of the things that differentiates one data center switch from another is the time it takes to process an incoming packet, the amount and nature of queuing that happens (which directly affects latency), the serialization delays at ingress and egress, and other factors that can add a few microseconds to each packet's processing (or to each transaction, in Wall Street speak).
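
To get a feel for the scale involved, consider just one of those components, the serialization delay, which is pure arithmetic:

```python
# Serialization delay alone - the time to clock a frame's bits onto the
# wire - for a full-size Ethernet frame at a few illustrative link speeds.
frame_bits = 1500 * 8

for gbps in (1, 10, 40):
    delay_us = frame_bits / (gbps * 1e9) * 1e6
    print(f"{gbps:>2} GbE: {delay_us:5.2f} us per frame")
# 1 GbE: 12.00us, 10 GbE: 1.20us, 40 GbE: 0.30us - and that's before any
# lookup or queuing delay has been added on top.
```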

So is adding a few microseconds really a big deal?

Oh Yes, you bet it is – especially in the big bad world of Wall Street.


You only need to google "high-frequency trading" to understand why it has suddenly become one of the most talked-about things on Wall Street.

Let's see how shaving off a few microseconds can help.

A mutual fund house places an order to purchase 100,000 shares of a company ABC that's currently going at $10. NASDAQ (or some other exchange) could offer a few selected high-frequency traders a peek into incoming orders for 30 milliseconds or so (this is illegal, but there are loopholes in the system). The high-frequency trader knows that a purchase order for 100,000 shares of ABC is coming and immediately picks up all the available shares at $10. After a few seconds, the mutual fund house's order hits the marketplace and the high-frequency trader sells the shares at $10.50, pocketing $50,000 from a single transaction. Now multiply this by the average number of transactions that typically take place on an exchange, and you arrive at the staggering amount of $$$ that's at stake here.
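
The arithmetic of that example, spelled out:

```python
# The front-running example above, as plain arithmetic.
shares = 100_000
buy_price, sell_price = 10.00, 10.50

profit = shares * (sell_price - buy_price)
print(f"${profit:,.0f} from a single front-run order")   # $50,000
```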

It's thus imperative that the high-speed computers doing all the number crunching have supporting network infrastructure that can help them make the kill. Lower network latency and increased throughput mean faster and better profits for the trading companies.

Firms using high-frequency trading earned over $21 billion in profits last year. The TABB Group estimates that a 5-millisecond delay in transmitting an automatic trade can cost a broker 1% of its flow, which could be worth $4 million in revenues per millisecond. According to Reuters, trading a stock is now far faster than a blink of an eye or a lightning strike.

In fact several high frequency traders house their systems as close as possible to the exchanges to minimize the latency in executing their orders.

There are also other environments where low latency is desirable: computer animation studios that may spend 80 to 90 hours rendering a single frame of a 3D movie, or scientific compute server farms (for Computational Fluid Dynamics, say) that might involve tens of thousands of compute cores. If the network is the bottleneck within those massive computer arrays, overall performance suffers.

Data centers thus patently need switches with extremely low latency in forwarding packets.

So what are the other things that data center switches must support, things that may not be available in ordinary switches?

Micro-bursting often happens in data centers: traffic bursts overrun the buffers and the switches drop packets. The problem is that these micro-bursts often happen at microsecond intervals, so the switches may not even report them. A good data center switch will absorb the micro-burst and forward the packets without dropping 'em.

Data centers, as we just saw, are designed for critical systems that require high availability. This means redundant power, efficient cooling, secure access, ideally no downtime, and a whole lot of other things, but most of all, it means no single points of failure.

Every device in a data center should have dual power supplies, and each of those power supplies should be fed from an independent power feed. The power supplies are sized such that the device can operate with only one power path. All devices in a data center should have front-to-back airflow, or ideally airflow that can be configured front-to-back or back-to-front. Thermal design for data centers is a science in itself, and there are petabytes of information available on the net on how to do it effectively.

All devices in a data center should support the means to upgrade, replace, or shut down any single chassis at any time without interruption, to meet hard Service Level Agreements (SLAs). In-Service Software Upgrades (ISSU) should ideally be available, though this can be worked around by properly distributing load so the prior requirement is still met. Data center devices should offer robust hardware, even NEBS compliance where required, and robust software to match.

This isn't an exhaustive list of what's required of switches deployed in data centers; it only serves to give a hint of what's needed there.

Oh and btw, I must finish this post and rush to place an order on NASDAQ before some high frequency trader preempts me and books all the profits!

OpenFlow, Controllers – What's missing in Routing Protocols today?

There is a lot of hype around OpenFlow as a technology and as a protocol these days. Some envision it to be the most exciting innovation in the networking industry since vacuum tubes, diodes and transistors were miniaturized to form integrated circuits. This is obviously an exaggeration, but you get the drift, right?

The idea in itself is quite radical. It changes the classical IP forwarding model from one where all decisions are distributed to one where there is a centralized beast – the controller – that takes the forwarding decisions and pushes that state to all the devices (could be routers, switches, WiFi access points, remote access devices such as CPEs) in the network.

Before we get into the details, let's look at the main components of a networking device – the Management, Control and Forwarding (Data) planes. The Management plane is used to manage (CLI, loading firmware, etc.) and monitor the device through its connection to the network, and it also coordinates functions between the Control and Forwarding planes. Examples of protocols processed in the management plane are SNMP, Telnet, HTTP, Secure HTTP (HTTPS), and SSH.

The Forwarding plane is responsible for forwarding frames – it receives frames on an ingress port, processes them, and sends them out on an egress port based on what's programmed in the forwarding tables. The Control plane gathers and maintains network topology information and passes it to the forwarding plane so that it knows where to forward the received frames. It's here that we run OSPF, LDP, BGP, STP, TRILL, etc. – basically, whatever it takes to program the forwarding tables.

Routing protocols gather information about all the devices and routes in the network and populate the Routing Information Base (RIB) with that information. The RIB then selects the best route from among all the routing protocols and populates the forwarding tables – and routing thus becomes forwarding.
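
As a sketch of that arbitration (with common default preference values, not any particular vendor's):

```python
# RIB arbitration in miniature: each route carries a protocol preference
# (administrative distance) and the lowest value wins the FIB slot.

PREFERENCE = {"connected": 0, "static": 1, "ospf": 110, "bgp": 170}

rib = {
    "192.168.1.0/24": [
        {"protocol": "ospf", "next_hop": "10.0.0.2"},
        {"protocol": "bgp",  "next_hop": "10.0.0.9"},
    ],
}

fib = {prefix: min(routes, key=lambda r: PREFERENCE[r["protocol"]])
       for prefix, routes in rib.items()}
print(fib["192.168.1.0/24"]["next_hop"])   # 10.0.0.2 - OSPF beats BGP
```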

So far, so good.

The question that keeps coming up is whether our routing protocols are good enough. Are IS-IS, OSPF, BGP, STP, etc. the only protocols that we can use today to map the paths in the network? Are there other, better options – can we do better than what we have today?

Note that these protocols were designed more than 20 years ago (STP was invented in 1985 and the first version of OSPF in 1989), with the mathematics behind them going back even further. The code that we have running in our networks is highly reliable, practical, proven to be scalable – and it works. So the question before us is: are there other, alternate, more efficient ways to program the network?

Let's start with what's good about the routing protocols today.

They are reliable – we've had them for the last 20+ years. They have proven themselves to be workable, and the code that we use to run them has proven itself to be reliable. There wouldn't be an Internet if these protocols weren't working.

They are deterministic, in that we know and understand them and they are highly predictable – we have experience with them. We know exactly what OSPF will end up doing and exactly how it will work when we configure it – there are no surprises.

What's also important about today's protocols is that they are self-healing. In a network with multiple paths between source and destination, the loss of an interface or a device causes the network to heal itself: it autonomously discovers alternate paths and begins to forward frames along a secondary path. While this may not necessarily be the best path, the frames do get delivered.

We can also say that today's protocols are scalable. BGP has certainly proven itself to run at the Internet's scale, with an extraordinarily large number of routes. IS-IS has, per the local folklore, proven to be more scalable than OSPF. Trust me when I say that the scalability aspect is not a limitation of the protocol, but rather a limitation of the implementation. More on this here.

And like everything else in the world, there are certain things that are not so good.

Routing protocols work under the premise that if you have a room full of people and you want them to agree on something, they must speak the same language. This means that if we're running OSPFv3, then all the devices in the network must run the exact same version of OSPFv3 and must understand the same things. Throw in a lot of different devices with varying capabilities, and they must all still support OSPFv3 if they want to be heard.

Most of the protocols are change-resistant, i.e., we find it very difficult to extend OSPFv2 to, say, introduce newer types of LSAs. We find it difficult to make enhancements to STP to make it better, faster, more scalable, or to add more features. Nobody wants to radically change the design of these protocols.

Another argument that's often made is that the metrics used by these protocols are really not good enough. BGP, for example, considers an entire AS as one hop. In OSPF and IS-IS, the metric is a function of the bandwidth of the link. But is bandwidth really the best input to feed into the computation that selects the best path?

When OSPF and all the other routing protocols we use today were designed and built, they were never meant to forward data packets while still re-converging. They were designed to drop data, because that was the right thing to do at the time: the mathematical computations/algorithms took long enough that it was more important to avoid loops by dropping packets. To cite an example, when OSPF comes up, it installs routes only after it has exchanged the entire LSDB with its neighbors and has reached the FULL state. Given the volume of ancillary data that OSPF exchanges today via Opaque LSAs, this design is overkill, and folks at the IETF are already working on addressing it.

We also have poor multipath ability with our current protocols. We can load-balance across multiple interfaces, but we have problems with the return path, which does not necessarily come back the way we wanted. We work around that, to some extent, with network designs that adapt to it.

Current routing protocols forward data based on the destination address only. We send traffic to 192.168.1.1 but we don't care where it came from. In truth, as networks get more complex and applications get more sophisticated, we need a way to route by source as well as by destination; we need to be able to do more sophisticated forwarding. Is it enough to just write somebody's address on an envelope, put it in a post box, and let it go in the hope that it gets there? Shouldn't it be possible to say: hey, this message is from the electricity department, it can go at a lower priority than, say, a birthday card from grandma that goes at a higher priority? They all go to the same address, but do we want to treat them with the same priority?
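
A flow-based forwarding table makes this kind of policy trivial to express. Here is a minimal, illustrative sketch of an OpenFlow-style match on both source and destination (rules and field names are assumptions for the example):

```python
# A flow table matches on several header fields at once, so two packets
# to the same destination can be treated differently.

flow_table = [  # ordered: first (most specific) match wins
    {"match": {"dst": "192.168.1.1", "src": "10.1.1.1"}, "action": "queue:high"},
    {"match": {"dst": "192.168.1.1"},                    "action": "queue:low"},
]

def classify(packet):
    for rule in flow_table:
        if all(packet.get(k) == v for k, v in rule["match"].items()):
            return rule["action"]
    return "drop"

print(classify({"src": "10.1.1.1", "dst": "192.168.1.1"}))  # queue:high (grandma)
print(classify({"src": "10.9.9.9", "dst": "192.168.1.1"}))  # queue:low
```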

So the question is: are our current protocols good enough? The answer is of course yes, but they do have some weaknesses, and that's the part which has been driving the next generation of networking, a part of which is where OpenFlow comes in ..

If we want to replace the routing protocols (OSPF, STP, LDP, RSVP-TE, etc.), then we need something to replace them with. We've seen that routing protocols have only one purpose for their existence: to update the forwarding tables in networking devices. The software that runs the whole system today is reasonably complex, i.e., software like OSPF, LDP, BGP and multicast all sits inside each device in an attempt to load the data into the forwarding tables. So a reasonably complex control plane layer sits inside every device in the network, just to load the correct data into the forwarding tables so that correct forwarding decisions are taken.

Now imagine for a moment that we could replace all this control plane with a central controller that updates the forwarding tables on all the devices in the network. This is essentially the OpenFlow idea, or the OpenFlow model.

In the OpenFlow model there is an OpenFlow controller that sends forwarding table data to an OpenFlow client in each device. The device firmware then loads that data into the forwarding path. So now we've taken all that control plane complexity out of the networking device and replaced it with a simple client that merely receives and processes data from the controller. The OpenFlow controller loads data directly into the OpenFlow client, which loads it directly into the FIB. In this situation the only software on the device is the chip firmware to load the data into the FIB or TCAM memories, plus the simple device management functions: the CLI, running the flash, monitoring the system environmentals. All the complexity around generating the forwarding table has been abstracted away into an external controller. It's also possible for a device to maintain its complex control plane alongside OpenFlow support; OpenFlow in such cases would load data into the FIBs in addition to the RIB that's maintained by the control plane.

The networking OS would shrink to handle device operations: boot, flash, memory management, the OpenFlow protocol handler, an SNMP agent, etc. Such a device has no OSPF, IS-IS, RSVP or multicast – none of the complex protocols running. Typically, routers spend close to 30+% of their CPU cycles doing topology discovery; if this information is already available on some central server, those cycles are freed up on every router in the network. There will also be no code bloat – we keep only what we need on the devices. Clearly, the smaller the code running on the devices, the fewer the bugs and the fewer the resources required to maintain it – all translating into lower cost.

If we have a controller that's dumping data into the FIB of a network device, then it's a piece of software – an application. It's a program that sits on a computer somewhere: an appliance, a virtual machine (VM), or even somewhere on a router. The controller needs connectivity to all the networking devices so that it can write out, i.e. send, the FIB updates to all devices, and it needs to receive data back from them. The controller would build a topology of the network in memory and run some algorithm to decide how the forwarding tables should be programmed in each networking device. Once the algorithm has been executed across the network topology, it dispatches the updates to the forwarding tables using OpenFlow.

OpenFlow is an API and a protocol that decides how to map the FIB entries out of the controller and into the devices. In this sense a controller is, in terms of what we understand today, very similar to the Stack Master in a Cisco stack. If one has 5 switches in a stack, then one of them becomes the Stack Master. It takes in all the data about the forwarding tables; it's the one that runs the STP algorithm, decides what the FIB looks like, and sends the FIB data over the stacking backplane to each of the devices, so that each has a local FIB (as decided by the Stack Master).

To better understand the Controllers we need to think of 5 elements as shown in the figure.

Controller

At the bottom we have the network with all its devices. The OpenFlow protocol provides the communication between these devices and the controller. The controller maintains its own model of the network (as shown on the right) and presents a user interface out to the user so that config data can come in. Via the user interface, the admin selects the rules, does some configuration, and describes how he wants the network to look. The controller then looks at the model of the network that it has constructed by gathering information from the network, and proceeds to program the forwarding tables in all the network devices to achieve that outcome. OpenFlow is a protocol – it's not software or a platform – a defined set of messages that allows for dynamic configuration of the networking devices.

A controller could build a model of the network, keep it in a database, and then run SPF, RSVP-TE, etc. algorithms across that model to produce the same results as OSPF or RSVP-TE running on live devices. We could build an SPF model inside the controller, run SPF over it, and load the resulting forwarding tables into all the devices in the network. This would free every device in the network from running OSPF, etc.
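
As a toy illustration of that idea — compute SPF centrally, derive each device's next hops — here is a sketch using the networkx library; a real controller would of course speak OpenFlow to install the result:

```python
import networkx as nx

# The controller's in-memory model of the network (weights = link costs).
g = nx.Graph()
g.add_weighted_edges_from([("A", "B", 1), ("B", "C", 1), ("A", "C", 10)])

# Run shortest paths once, centrally, and derive a per-device FIB.
fibs = {node: {} for node in g}
for src in g:
    for dst in g:
        if src != dst:
            path = nx.shortest_path(g, src, dst, weight="weight")
            fibs[src][dst] = path[1]   # next hop from src toward dst

print(fibs["A"])   # {'B': 'B', 'C': 'B'} - A reaches C via B, not the cost-10 link
```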

The controller has real-time visibility of the network: topology, preferences, faults, performance, capacity, and so on. This data can be aggregated by the controller and made available to network applications. Modern network applications can then be made adaptive, with the potential to become more network-efficient and achieve better application performance (e.g., accelerated download rates, higher-resolution video), by leveraging better network-provided information.

Theoretically these concepts can be used for saving energy by identifying underused devices and shutting them down when they are not needed.

So, for one last time, let's see what OpenFlow is.

OpenFlow is a protocol between networking devices and an external controller – in other words, a standard method to interface between the control and data planes. In today's network switches, the data forwarding path and the control path execute on the same device. The OpenFlow specification defines a new operational model for these devices that separates these two functions, keeping the packet-processing path on the switch but moving the control functions, such as routing protocols and ACL definition, from the switch to a separate controller. The OpenFlow specification defines the protocol and messages that are communicated between the controller and the network elements to manage their forwarding operation.

Added later: Network Function Virtualization is not directly SDN. However, if you're interested, I have covered it here and here.