This Industry Viewpoint was authored by Dan Joe Barry, vice president of marketing, Napatech.
Mary Meeker, then head of Morgan Stanley's global technology research team, projected that mobile data traffic would increase by almost 4,000 percent by 2014, a compound annual growth rate of more than 100 percent. Cisco's Global Mobile Data Traffic Forecast Update, 2013–2018 says mobile video will increase fourteenfold from 2013 to 2018, by which point it will account for 69 percent of total mobile data traffic.
As data traffic increases rapidly, connectivity speeds must keep pace. High-bandwidth applications such as video on demand will continue to drive adoption of 40 Gbps and 100 Gbps connections. Data delivered in the right way creates insights that enable action. Being able to understand all the data within networks ensures that apps run quickly, videos stream smoothly and end-user data stays secure. Yet as the volume and complexity of data increase, processing it all becomes increasingly difficult.
Foundational Principles for Operating at 100G
Providers of network equipment are tasked with reliably increasing performance at connection speeds up to 100 Gbps while reducing risk and time-to-market. They must also effectively manage and secure networks while still handling a varied portfolio of 1, 10, 40 or even 100 Gbps products. Network services are agnostic to connection speed, so analysis will have to be performed at the same level across speeds ranging from 1 Mbps to 100 Gbps. Below is a list of best practices to ensure the network of today can move successfully into the 100G era.
Identify and Analyze Flows
Insight into activity at a single point in the network comes from analyzing individual Ethernet frames. Network applications must be able to examine the flows of frames that are transmitted between specific devices (identified by their IP addresses) or even between applications on specific devices (identified, for example, by protocol and the UDP/TCP/SCTP port numbers used by the application).
To gain visibility into activity across the network, and to control how much bandwidth each service consumes in high-speed networks of up to 100 Gbps, it is important to identify and analyze flows of data. Flow identification also enables intelligent flow distribution, where frames are distributed across up to 32 CPU cores for massively parallel processing.
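As a minimal sketch of the idea (the names and the hashing scheme here are illustrative assumptions, not any vendor's API), frames can be assigned to cores by hashing a canonical 5-tuple, so that both directions of a flow always land on the same core:

```python
import hashlib

NUM_CORES = 32  # illustrative: distribute flows across up to 32 cores

def flow_key(src_ip, dst_ip, protocol, src_port, dst_port):
    """Build a canonical 5-tuple so both directions of a flow map together."""
    # Sort the endpoints so A->B and B->A hash to the same flow key.
    if (src_ip, src_port) > (dst_ip, dst_port):
        src_ip, dst_ip = dst_ip, src_ip
        src_port, dst_port = dst_port, src_port
    return f"{src_ip}|{dst_ip}|{protocol}|{src_port}|{dst_port}"

def core_for_flow(key: str) -> int:
    """Hash the flow key to a core index; all frames of a flow land on one core."""
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_CORES

key = flow_key("10.0.0.1", "192.0.2.7", "TCP", 49152, 443)
print(core_for_flow(key))  # a stable core index in [0, 32)
```

Because the mapping is deterministic, each core sees complete flows and can analyze them without cross-core coordination.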
Reducing Data Volume
One of the main challenges in analyzing real-time data in high-speed networks is the sheer volume of data. Solutions operating at speeds of up to 100 Gbps must deliver real-time data in a form that allows quick and easy analysis, and what will distinguish them is the ability to accelerate the performance of analysis applications. This can be achieved by reducing the amount of data to be analyzed, ensuring that applications are not overwhelmed and process only the frames that actually need examination. Such reduction can be accomplished through features such as frame and flow filtering, deduplication and slicing, as sketched below.
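A minimal sketch of two of these reduction features, deduplication and slicing; the window size and slice length are illustrative assumptions:

```python
import hashlib
from collections import deque

SLICE_BYTES = 128     # assumption: keep only the header region for analysis
DEDUP_WINDOW = 1024   # assumption: remember the last 1024 frame digests

recent = deque(maxlen=DEDUP_WINDOW)

def reduce_frame(frame: bytes):
    """Drop frames seen recently (deduplication), then slice the survivors."""
    digest = hashlib.md5(frame).digest()
    if digest in recent:        # duplicate, e.g. from overlapping taps or SPANs
        return None
    recent.append(digest)
    return frame[:SLICE_BYTES]  # slicing: forward only the first bytes
```

Each feature multiplies the others' effect: a deduplicated, sliced, filtered stream can be a small fraction of the raw line rate.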
Frame Buffering
High-speed solutions must be able to capture network traffic at full line rate, with almost no CPU load on the host server, for all frame sizes. Zero-loss packet capture is critical for applications that need to analyze all the network traffic in real time, and it reliably delivers the analysis data that network management and security solutions demand. Full line-rate capture, frame buffering and optimal configuration of host buffer sizes together remove the bottlenecks that can cause packet loss.
Frame buffering should be employed to absorb data bursts and thereby ensure no data is lost. It also works around a transfer limitation: PCI interfaces provide a fixed bandwidth for moving data from the network to the application, which a traffic burst can momentarily exceed. Buffered frames can be transferred once the burst has passed, making frame buffering a critical feature for high-speed network analysis.
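A minimal sketch of the buffering idea, with a hypothetical in-memory queue standing in for the on-board and host buffers a real accelerator would use:

```python
from collections import deque

class FrameBuffer:
    """Absorb bursts that momentarily exceed the host transfer rate."""

    def __init__(self, capacity_frames: int):
        self.queue = deque()
        self.capacity = capacity_frames

    def enqueue(self, frame: bytes) -> bool:
        if len(self.queue) >= self.capacity:
            return False            # buffer exhausted: this frame would be lost
        self.queue.append(frame)    # captured at line rate during the burst
        return True

    def drain(self, frames_per_tick: int):
        """Transfer frames to the application once the burst has passed."""
        for _ in range(min(frames_per_tick, len(self.queue))):
            yield self.queue.popleft()
```

Sizing the buffer to the longest expected burst, given the fixed PCI transfer rate, is what turns "zero packet loss" from a goal into a guarantee.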
Network Insights
Frame classification provides details on the type of network protocols being used. For users who want to monitor network traffic in the most efficient way, it is important to be able to recognize as many protocols as possible, as well as extract information from layer 2-4 network traffic. Header information for the various protocols transported over Ethernet must be made available for analysis. This includes encapsulation and tunneling protocols.
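As an illustration of the kind of layer 2-4 classification involved, here is a simplified sketch that parses untagged IPv4-over-Ethernet frames; VLAN tags, IPv6 and tunneling protocols are left out for brevity:

```python
import struct

def classify(frame: bytes) -> dict:
    """Extract layer 2-4 header fields from a raw Ethernet frame (no VLAN tags)."""
    eth_type, = struct.unpack_from("!H", frame, 12)   # EtherType follows the two MACs
    info = {"ethertype": hex(eth_type)}
    if eth_type == 0x0800:                            # IPv4
        ihl = (frame[14] & 0x0F) * 4                  # IP header length in bytes
        proto = frame[14 + 9]                         # IP protocol field
        info["l3"] = "IPv4"
        info["l4"] = {6: "TCP", 17: "UDP", 132: "SCTP"}.get(proto, str(proto))
        if proto in (6, 17, 132):
            sport, dport = struct.unpack_from("!HH", frame, 14 + ihl)
            info["ports"] = (sport, dport)
    return info
```

A production classifier recognizes far more protocols than this, but the principle is the same: walk the header chain and surface the fields the analysis application needs.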
Off-loading to Accelerate
High-speed solutions need to offer features that empower appliance vendors to maximize the performance of their analysis applications via acceleration. These features must off-load data processing that is normally performed by the analysis application. Some examples of off-loading features are: intelligent multi-CPU distribution, cache pre-fetch optimization, coloring, filtering and checksum verification. These free up CPU cycles, allowing more analysis to be performed faster.
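To make one of these off-loads concrete, here is IPv4 header checksum verification (the RFC 1071 one's-complement sum) in sketch form; an accelerator performs this in hardware, sparing the CPU the per-frame work:

```python
import struct

def ipv4_header_checksum_ok(header: bytes) -> bool:
    """Verify an IPv4 header checksum (RFC 1071 one's-complement sum).

    When an accelerator verifies checksums in hardware, the analysis
    application can skip this per-frame computation entirely.
    """
    if len(header) % 2:
        header += b"\x00"                   # pad to a whole number of 16-bit words
    total = sum(struct.unpack(f"!{len(header) // 2}H", header))
    while total >> 16:                      # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return total == 0xFFFF                  # a valid header sums to all ones
```

Small as it looks, repeating this for every frame at 100 Gbps adds up; off-loading it is pure CPU headroom returned to analysis.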
Speeding Tunnel Traffic
When senders need to move data safely and reliably over networks outside their control, they use tunnels. Tunneling poses a challenge for analysis because the data to be analyzed is encapsulated in the tunnel payload and must first be extracted before analysis can be performed, an extra and costly data processing step. By off-loading the recognition of tunnels and the extraction of information from them, high-speed solutions can significantly accelerate the performance of analysis applications.
This is particularly true for mobile networks, since all subscriber Internet traffic passes through one point in the network: the GPRS Tunneling Protocol (GTP) tunnel between the serving and gateway GPRS support nodes. Monitoring this interface is crucial for assuring quality of service. Next-generation solutions will open up this interface, providing visibility and insight into the contents of GTP tunnels. Analysis applications can use this capability to test, secure and optimize mobile networks and services.
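A minimal sketch of GTP-U decapsulation, handling only the mandatory 8-byte header of a G-PDU and treating the optional sequence-number and extension fields as out of scope:

```python
import struct

GTPU_PORT = 2152  # standard GTP-U UDP port

def gtpu_inner_payload(udp_payload: bytes):
    """Strip a GTP-U header and return the encapsulated user packet, or None.

    Handles only GTPv1 G-PDUs with the mandatory 8-byte header; the optional
    fields signaled by the E/S/PN flags are not parsed in this sketch.
    """
    flags, msg_type, length, teid = struct.unpack_from("!BBHI", udp_payload, 0)
    if (flags >> 5) != 1 or msg_type != 0xFF:   # GTPv1 G-PDU only
        return None
    if flags & 0x07:                            # E/S/PN set: optional fields present
        return None                             # out of scope for this sketch
    return udp_payload[8:8 + length]            # the subscriber's inner IP packet
```

Done in hardware, this step hands the analysis application the inner packet directly, as if the tunnel were not there.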
Precise Time-Stamping
Being able to pinpoint when an event occurred, and how much delay it introduced in the network, is important for many high-speed analysis applications. Assuring the quality of time-sensitive services and transactions is often essential and requires extreme precision. In 100 Gbps networks, nanosecond precision is essential for reliable analysis: at 10 Gbps, an Ethernet frame can be received or transmitted every 67 nanoseconds; at 100 Gbps, that window shrinks to 6.7 nanoseconds.
Nanosecond-precision time-stamping is therefore essential for uniquely identifying when each frame is received. Precise time-stamping of every Ethernet frame allows frames to be merged in the correct order. The result is a significant acceleration of performance, since Ethernet frames can be grouped and analyzed in an order that makes sense for the application rather than one dictated by hardware implementations.
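A small sketch of why ordered merging follows from precise timestamps: given per-port streams that hardware stamping keeps time-ordered, a standard merge yields a single correctly ordered stream (the frames and timestamps below are invented for illustration):

```python
import heapq

# Frames as (timestamp_ns, port, payload); hardware stamps each on arrival.
port_a = [(1_000_000_067, "A", b"frame1"), (1_000_000_201, "A", b"frame3")]
port_b = [(1_000_000_134, "B", b"frame2"), (1_000_000_268, "B", b"frame4")]

# heapq.merge assumes each input stream is already time-ordered,
# which hardware time-stamping at capture guarantees per port.
for ts, port, frame in heapq.merge(port_a, port_b):
    print(ts, port, frame)
```

With stamps only 6.7 nanoseconds apart at 100 Gbps, anything coarser than nanosecond resolution would produce ties and scramble this ordering.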
What it Takes to Accelerate
Providers of network equipment need to prepare for the 100 Gbps age that is now upon us. They can do so by exploring solutions that help them stay ahead of the data growth curve brought on by the explosion of mobile data traffic, cloud computing, mobility and big data analysis. Four priorities for accelerating the network to 100G are:
- A “universal” Application Programming Interface (API) that allows applications to be developed once and used with a broad range of accelerators so that combinations of different accelerators with different port speeds can be installed in the same server.
- Reliable hardware platforms for the development of 100 Gbps analysis products. A 100 Gbps accelerator, for example, can intelligently manage the data that is presented for analysis, providing extensive features for managing the type and amount of data. Slicing and filtering of frames and flows, even within GTP and IP-in-IP tunnels, significantly reduces the amount of data. Look for deduplication features that can be extended in analysis software to ensure that only the right data is being examined.
- To enable telcos to focus their development efforts on the application, not the hardware, consider PCI-SIG® compliant products that will fit into any commercial off-the-shelf server.
- To enable multiple applications running on the same server to analyze the same data, look for software suites that provide data-sharing capabilities (a minimal sketch follows this list). When combined with intelligent multi-CPU distribution, this allows the right data to be presented to the right analysis application, thus sharing the load. Intelligent features for flow identification, filtering and distribution across up to 32 CPU cores accelerate application performance with extremely low CPU load.
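A minimal sketch of the data-sharing idea from the last item, with in-process queues standing in for the shared-memory mechanisms a real software suite would provide:

```python
import queue

class SharedCapture:
    """Fan one captured stream out to multiple analysis applications."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self) -> queue.Queue:
        q = queue.Queue()
        self.subscribers.append(q)
        return q

    def publish(self, frame: bytes):
        for q in self.subscribers:   # every application sees the same data
            q.put(frame)

cap = SharedCapture()
security_q = cap.subscribe()         # e.g. an intrusion detection system
performance_q = cap.subscribe()      # e.g. a performance monitor
cap.publish(b"\x00" * 64)
print(security_q.get(), performance_q.get())
```

The point is that the traffic is captured once and analyzed many times, rather than duplicated per application.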
Positioned for High Performance
With the exponential increase in mobile data traffic, new technologies are setting the stage to enable telecoms to manage the ever-increasing data loads without compromise. By scaling with increasing connectivity speeds, as well as accelerating network management and security applications, telecoms are better able to position themselves to provide the excellent performance their customers expect.
About the Author:
Daniel Joseph Barry is VP of Marketing at Napatech and has over 20 years' experience in the IT and telecom industry. Prior to joining Napatech in 2009, Dan Joe was Marketing Director at TPACK, a leading supplier of transport chip solutions to the telecom sector. From 2001 to 2005, he was Director of Sales and Business Development at optical component vendor NKT Integration (now IgnisPhotonyx), following various positions in product development, business development and product management at Ericsson. Dan Joe joined Ericsson in 1995 from a position in the R&D department of Jutland Telecom (now TDC). He has an MBA and a BSc degree in Electronic Engineering from Trinity College Dublin.