Why do we need specialized switches in Data Centers?


What's the big deal about data centers, and why do they need special routers and switches anyway? Why can't they use the same switches that folks use in their back offices, or that service providers use in their networks? What's so special, really, about a bunch of servers that need Internet connectivity, huh?

Having worked in the metro Ethernet space all my life, I wasn't sure I really understood the hype, or the reason why data centers required specialized HW.

It's only once I started reading about data centers – how they work and what they're supposed to do – that I was able to appreciate their need for specialized HW, and why the existing products may not be cut out for the job.

In the world of Wall Street, milliseconds can mean billions of dollars. Really, I am not kidding here. Packets carrying Wall Street transactions arrive at the switch and are forwarded to a server in the data center. There they ride up the protocol stack to the application that executes the trade. The commit message then has to travel back down the stack and over the wire to the switch, which does a lookup in its forwarding tables and sends it out on an egress port.

What differentiates one data center switch from another is the time it takes to process an incoming packet: the amount and nature of queuing (which directly affects latency), the serialization delays at ingress and egress, and other factors that can each add a few microseconds to packet processing (or to a transaction, in Wall Street speak).
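To put rough numbers on it, here is a back-of-the-envelope sketch in Python. The link speed, frame size, and the per-stage lookup and queuing delays are assumptions picked for illustration, not measurements from any particular switch:

```python
# Back-of-the-envelope latency budget for one switch hop.
# All figures below are illustrative assumptions, not vendor specs.

FRAME_BYTES = 1500          # a full-size Ethernet frame
LINK_BPS = 10e9             # a 10 GbE port

# Time to clock the frame onto the wire (pure physics of the link rate).
serialization_s = (FRAME_BYTES * 8) / LINK_BPS

lookup_s = 0.5e-6           # assumed forwarding-table lookup time
queuing_s = 2.0e-6          # assumed queuing delay under moderate load

# Serialization is paid twice: once at ingress, once at egress.
total_us = (2 * serialization_s + lookup_s + queuing_s) * 1e6

print(f"serialization (one side): {serialization_s * 1e6:.2f} us")  # 1.20 us
print(f"total per-hop latency:    {total_us:.2f} us")               # 4.90 us
```

Even in this toy budget a single hop costs a few microseconds, and every extra hop or deeper queue multiplies it.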

So is adding a few microseconds really a big deal?

Oh yes, you bet it is – especially in the big bad world of Wall Street.


You only need to google "high-frequency trading" to understand why it has suddenly become one of the most talked-about things on Wall Street.

Let's see how shaving off a few microseconds can help.

A mutual fund house places an order to purchase 100,000 shares of a company, ABC, that's currently trading at $10. NASDAQ (or some other exchange) could offer a few select high-frequency traders a peek at incoming orders for 30 milliseconds or so (this is illegal, but there are loopholes in the system). The high-frequency trader knows that a purchase order for 100,000 shares of ABC is coming and immediately picks up all the available shares at $10. A few seconds later, the mutual fund house's order hits the marketplace and the high-frequency trader sells the shares at $10.50, pocketing $50,000 from a single transaction. Now multiply this by the average number of transactions that typically take place on an exchange, and you arrive at the staggering amount of $$$ that's at stake here.
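For the record, the arithmetic in that example is simple enough to spell out:

```python
# The numbers from the front-running example above, spelled out.
shares = 100_000
buy_price = 10.00    # the HFT buys ahead of the fund's order
sell_price = 10.50   # and sells into it moments later

profit = shares * (sell_price - buy_price)
print(f"profit on one front-run order: ${profit:,.0f}")  # $50,000
```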

It's thus imperative that the high-speed computers doing all the number crunching have a supporting network infrastructure that can help them make the kill. Lower network latency and higher throughput mean faster and better profits for the trading firms.

Firms using high-frequency trading earned over $21 billion in profits last year. The TABB Group estimates that a 5-millisecond delay in transmitting an automatic trade can cost a broker 1% of its flow, which could be worth $4 million in revenue per millisecond. According to Reuters, trading a stock is now far faster than a blink of an eye or the speed of a lightning strike.

In fact, several high-frequency traders house their systems as close as possible to the exchanges (so-called co-location) to minimize the latency in executing their orders.

There are other environments where low latency is desirable, too: computer animation studios that may spend 80 to 90 hours rendering a single frame of a 3D movie, or scientific compute server farms (for Computational Fluid Dynamics, say) that might involve tens of thousands of compute cores. If the network is the bottleneck within those massive computer arrays, the overall performance suffers.

Data centers thus patently need switches with extremely low packet-forwarding latency.

So what else must data center switches support that may not be available in ordinary switches?

Micro-bursting often happens in data centers: traffic arrives in short, intense bursts that overrun the buffers, and the switches drop packets. The problem is that these micro-bursts occur at microsecond timescales, so the switches' counters may not even report them. A good data center switch will absorb the micro-burst and forward the packets without dropping 'em – the toy model below shows how buffer depth makes the difference.
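Here is a minimal sketch of that effect. The buffer depth, drain rate, and burst shape are all made-up numbers for illustration:

```python
# Toy model of a micro-burst hitting a shallow-buffered switch port.
# All numbers (buffer depth, drain rate, burst shape) are assumptions.

BUFFER_PKTS = 100          # shallow egress buffer, in packets
DRAIN_PER_US = 1           # packets the port can serialize per microsecond
BURST = [50, 80, 80, 10, 0, 0, 0, 0]   # arrivals in each microsecond slot

queue = 0
dropped = 0
for arrivals in BURST:
    queue += arrivals
    if queue > BUFFER_PKTS:              # buffer overrun: tail-drop the excess
        dropped += queue - BUFFER_PKTS
        queue = BUFFER_PKTS
    queue = max(0, queue - DRAIN_PER_US)  # port drains at line rate

print(f"packets dropped during the burst: {dropped}")  # 117 with these numbers
# Re-run with BUFFER_PKTS = 250 and the same burst is fully absorbed
# (dropped == 0) -- which is exactly what a deep-buffered data center
# switch is built to do.
```

The counters in between the microsecond slots would look perfectly healthy, which is why these drops so often go unreported.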

Data centers, as we just saw, are designed for critical systems that require high availability. This means redundant power, efficient cooling, secure access, ideally no downtime, and a whole lot of other things – but most of all, it means no single point of failure.

Every device in a data center should have dual power supplies, and each of those power supplies should be fed from an independent power feed. The power supplies are sized such that the device can operate with only one power path. All devices in a data center should have front-to-back airflow, or ideally, airflow that can be configured front-to-back or back-to-front. Thermal design for data centers is a science by itself, and there are petabytes of information available on the net on how to do it effectively.

All devices in a data center should support the means to upgrade, replace, or shut down any single chassis at any time without interruption, to meet hard Service Level Agreements (SLAs). In-Service Software Upgrade (ISSU) should ideally be available, though its absence can be worked around by distributing load so that the previous requirement is still met. Data center devices should offer robust hardware – even NEBS compliance where required – and robust software to match.

This isn't an exhaustive list of what's required of switches deployed in data centers; it only serves to give a hint of what's needed there.

Oh, and btw, I must finish this post and rush to place an order on NASDAQ before some high-frequency trader preempts me and books all the profits!

2 thoughts on "Why do we need specialized switches in Data Centers?"

  1. The 'need' for faster transaction speeds is a red herring. The 'billions of dollars' is actually split between hijacked transactions and rapid transaction turnovers. Traders with more leverage, being marginally closer to the switch, can jump in between the initial buy and commit transactions, buying just before the commit and raising the price for the end user. On top of that, the average transaction hold on the markets is now about 10 seconds.

    This is a case where an architectural rethink is in order, because the problem is not 'I need to be quicker' but 'we need to stop the hijacks and rapid buys and sells'. Providing a fair market will prevent the hijacks, and that is in the works now. The second problem can be resolved with micro-taxes or minimum hold times. The micro-tax, paid as insurance against trader failure (instead of relying on government welfare), will naturally slow rapid micro-transactions while being far too small to impact transactions held much longer for investment purposes.

    Sometimes the answer to a technical problem is to change the rules of engagement.

