Tim Nichols (Global) - Extreme Network Speeds Outpace Network Visibility

According to the latest data from Infonetics, 100Gbps networking is gathering momentum in the USA, and not just within the traditionally 'early adopter' telco community, either. If you look at the messaging from any of the big infrastructure vendors, they're all pushing 40G and 100G switching systems capable of moving vast amounts of data inside data centers.

100 Gigabits of data (or 12.5 Gigabytes) is a lot of data, that's for sure. But how much is it in reality? Just for fun, and using some fairly crude math, in one second a fully loaded 100G link is able to transmit any of the following (a quick sketch of the arithmetic follows the list):

4 full-length Avengers movies
4,167 Lady Gaga songs
41,667 Kindle eBooks
178,571 simultaneous 'Breaking Bad' Netflix streams
1,250,000 simultaneous Skype phone calls
89,285,714 'LOL' text messages
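
For the curious, here's that crude math as a minimal Python sketch. The per-item sizes and rates are assumptions reverse-engineered to reproduce the figures above, not measured values:

```python
# Back-of-envelope check of the list above. Sizes and rates are
# assumptions chosen to match the article's figures, not measurements.
LINK = 100e9  # bits per second on a fully loaded 100G link

# For files: bits per copy. For streams/calls: bits per second per
# stream, so the division yields simultaneous streams, not copies/sec.
items = {
    "full-length movies (~3.1 GB each)":  3.125e9 * 8,
    "Lady Gaga songs (~3 MB each)":       3e6 * 8,
    "Kindle eBooks (~300 KB each)":       300e3 * 8,
    "'Breaking Bad' streams (~560 kbps)": 560e3,
    "Skype calls (~80 kbps)":             80e3,
    "'LOL' texts (140 bytes each)":       140 * 8,
}

for name, bits in items.items():
    print(f"{LINK / bits:>12,.0f}  {name}")
```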

100G is not for everyone, for obvious reasons, but big banks, research institutions, telcos, service providers and organizations serving the media and entertainment sector are leading the charge. The economics of 100G are actually very compelling when contrasted with 10Gbps and even 40G networking. Over optical fiber, a 10Gbps link consumes a whole glass strand and uses one connector; 100G delivered using SR-type 10Gbps optics consumes 10 whole strands bonded together, but still only one connector. So, if you're moving a LOT of data around, 100G is a perfectly rational choice, even if the price tag is fairly extreme today.

Unfortunately, deploying 100G and living with 100G are two different things. From an operational perspective, 100G network segments are just like any other network segment: they need to be monitored, analyzed and recorded so that issues can be detected and investigated before end users are impacted.

A 10Gbps network segment can be connected to a 10Gbps monitoring port on an analytics or IDS appliance; for 100G, no such monitoring system exists. Nada. Niet. Nuffin. The dirty truth here is that the monitoring industry has really been caught napping. Today, anyone and everyone operating a production 100G network is, for all intents and purposes, flying blind with their fingers crossed that nothing is wrong under the covers.

In fact, this is not the first time this situation has arisen; back in 2008, when 10Gbps arrived with a vengeance, most organizations only had 1Gbps monitoring infrastructure, and IT ops teams faced a similar problem. The answer at that time came in the form of monitoring access switches (which Gartner tidily named Network Packet Brokers earlier this year). NPBs solved the problem overnight by ingesting 10Gbps of traffic and load balancing it out over multiple 1Gbps ports that could be connected to existing 1Gbps-capable infrastructure.
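
To make that trick concrete, here's a minimal sketch of the core idea, purely illustrative (real brokers do this in switch hardware, not Python): hash each packet's flow 5-tuple to pick an output port, so every packet of a given flow lands on the same lower-speed tool:

```python
# Toy flow-hash load balancer: map a 5-tuple to one of N tool ports.
import zlib

NUM_TOOL_PORTS = 10  # e.g. one 10Gbps feed fanned out to ten 1Gbps tools

def tool_port(src_ip: str, dst_ip: str, proto: int,
              src_port: int, dst_port: int) -> int:
    """Deterministically map a flow 5-tuple to a tool port."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % NUM_TOOL_PORTS

# Sorting the endpoints before hashing (symmetric hashing) would keep
# both directions of a conversation on the same tool; omitted here.
print(tool_port("10.0.0.1", "192.168.1.5", 6, 51515, 443))
```

The hash is what makes this work for analysis tools: because it's deterministic per flow, each tool sees whole conversations rather than a random shuffle of packets.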

Over and above traffic de-multiplexing, monitoring access switches also help to solve a number of other issues, including traffic duplication, which addresses organizations' need to run more monitoring tools than they have access points, and traffic filtering, which helps manage monitoring over-subscription. For many organizations, network-monitoring switches are now an end-of-row feature in all their data centers, and their value is well understood and respected.
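
The same toy model extends to those two functions. A hedged sketch, with invented packet fields and tool callbacks, just to show the shape of filter-then-duplicate:

```python
# Filter a tapped stream to shed load, then copy each surviving
# packet to every attached tool. Fields and tools are invented.

def want(pkt: dict) -> bool:
    """Filter: keep only web traffic, say, to manage over-subscription."""
    return pkt["dst_port"] in (80, 443)

def broker(pkt: dict, tools: list) -> None:
    """Duplicate one filtered packet to every attached tool."""
    if want(pkt):
        for tool in tools:
            tool(pkt)  # each tool gets its own copy

ids    = lambda pkt: print("IDS saw port", pkt["dst_port"])
record = lambda pkt: print("recorder saw port", pkt["dst_port"])
broker({"dst_port": 443}, [ids, record])
```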

Why hasn't history repeated itself with 100G? Well, in due course it probably will, but there are a couple of fairly major issues that monitoring solution vendors have to step up and deal with first.

Firstly, 100G is a LOT more complicated than 10Gbps. Today's NPB market is built on commodity merchant silicon (re-purposed from that found in standard Ethernet switches), which is perfect for the basic task of moving 10Gbps traffic around but is really tricky to scale to 100G. It's going to take a different architectural approach, potentially using a different technology, to meet the 10X increase in throughput, and that appears to be beyond the scope and capability of most, if not all, of the current set of NPB vendors.

Secondly, there's a fundamental problem in the 10Gbps tools market that's going to bite in a 100G world. The ugly truth is that very few, if any, of the monitoring and security tools on the market today that support 10Gbps ports are actually capable of operating at 10Gbps for any sustained period of time. For most monitoring tools, particularly packet-based tools that include any kind of recording capability, sustained performance might range from 1Gbps to 5Gbps, but rarely gets anywhere close to 10. At some point above 2Gbps, traditional software-based tools that are designed to provide visibility quite simply go blind through packet loss.
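
To put hard numbers on why, here's the standard Ethernet worst-case arithmetic (a minimum-size frame occupies 84 bytes on the wire, including preamble and inter-frame gap); these figures are textbook framing math, not numbers from the article:

```python
# Per-packet time budgets at line rate, worst case (minimum-size frames).
# 64B frame + 8B preamble + 12B inter-frame gap = 84B = 672 bits on the wire.
WIRE_BITS_PER_MIN_FRAME = 84 * 8

for gbps in (1, 10, 100):
    pps = gbps * 1e9 / WIRE_BITS_PER_MIN_FRAME
    print(f"{gbps:>3}Gbps: {pps/1e6:6.2f} Mpps worst case, "
          f"{1e9/pps:6.2f} ns to process each packet")
```

At 10Gbps that's nearly 15 million packets per second, leaving roughly 67 nanoseconds to handle each one; at 100G the budget shrinks to under 7ns, which is why purely software-based capture falls over long before line rate.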

The issue of packet loss is being largely ignored by most organizations and vendors today, but will become front of mind when it comes to 100G monitoring because the types of companies that buy 100G really care and will ask the tough questions that most people have been dancing around for the last 5 years. Maybe 100G will finally force the issue of 10Gbps packet loss?

By Tim Nichols, Vice President, Marketing, Endace

 
